The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file:
I get: ERROR! conflicting action statements: hosts, tasks.
Is this because I'm trying to run against hosts that were never included in the calling task?
Is there a way to accomplish this other than having the calling playbook be:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than in the calling playbook
I believe the answer is yes, although it'll be weird and could cause subsequent folks who interact with your playbook some confusion.
Given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: touch   # the file module has no "present" state; touch creates the file
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
Then the glue needed to bridge them together is the dynamic creation of a group, plus that delegate_to: keyword:
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1 # or whatever
        # ... other hostvars here ...

    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml, via some shared vars: entries that say which host-and-IP should be added to the magic group name, which also has the benefit of keeping the magic group name localized to the file that consumes it.
BE AWARE: I did actually test this, but not extensively, so there may be some weird things, such as the need for run_once: yes on those tasks to keep them from being run groups.ssh_hosts|length times, or similar.
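For illustration, a hedged sketch of what such a combined create_files.yml might look like; the extra_host_name / extra_host_ip vars are hypothetical names a calling play would have to supply:
# sketch of a combined create_files.yml; extra_host_name / extra_host_ip
# are made-up vars the calling play would set
- add_host:
    groups: not_known_at_launch_time
    name: '{{ extra_host_name }}'
    ansible_host: '{{ extra_host_ip }}'

- file:
    path: /tmp/hello_world
    state: touch
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
  run_once: yes # per the caveat above, so this doesn't repeat per ssh_hosts member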
As Vladimir correctly pointed out, what you actually want to happen is to make that relationship formal:
- hosts: ssh_hosts
  tasks:
    # ... whatever tasks you had before ...
    - add_host: ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO
We have inherited quite a lot of Ansible playbooks, with roles/tasks underneath.
One of the Ansible playbooks is triggered by a shell script after setting a few variables:
ansible-playbook -i inventory.yml install_multiple_nodes.yml
install_multiple_nodes.yml contains the following:
- hosts: all_nodes
  run_once: true
  roles:
    - set_keys
    - download_rpms
  become: true

- hosts: node1
  roles:
    - install_rpms_node1
    - custom_actions_node1
  become_user: node1_user

- hosts: node2
  roles:
    - install_rpms_node2
    - custom_actions_node2
  become_user: node2_user

# ... this continues for multiple nodes ...
The entire playbook takes about 1.5 hours to run, but many of the plays within it could run in parallel. For instance, in the snippet above, the node1 and node2 plays could run in parallel, but further down the chain some plays have to wait for others.
Is there a way we can:
1. Put a flag to say that node1 and node2 can run in parallel?
2. What's the best practice for this type of playbook, i.e. parallelise a set of plays, declare a dependency on them, then start the next set, etc.?
Following my comment, one thing you can try to work around the poor design:
- hosts: all_nodes
  run_once: true
  roles:
    - set_keys
    - download_rpms
  become: true
  tasks:
    - name: Run machine specific roles
      ansible.builtin.include_role:
        name: "{{ item }}_{{ inventory_hostname }}"
      loop:
        - install_rpms
        - custom_actions
Notes:
- Untested; check in your own environment.
- This will surely break as soon as there is no specific role for a given node. Add conditions/tests if that is the case (see the sketch below).
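One hedged way to add such a guard, assuming the roles live in a roles/ directory next to the playbook (this is a controller-side path check, so it won't see roles installed elsewhere, e.g. via a custom roles_path):
# sketch only: skip node-specific roles that don't exist on disk;
# assumes roles/ sits next to the playbook
- name: Run machine specific roles
  ansible.builtin.include_role:
    name: "{{ item }}_{{ inventory_hostname }}"
  loop:
    - install_rpms
    - custom_actions
  when: (playbook_dir ~ '/roles/' ~ item ~ '_' ~ inventory_hostname) is directory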
I have the following setup for Ansible, and I would like to parameterize a filter that will loop and filter out specific hosts.
- name: run on hosts
  hosts: "{{ item }}"
  roles:
    - directory/role-name
  with_items:
    - us-east-1a
    - us-east-1b
    - us-east-1c
The result would be that the role called role-name would be first run on us-east-1a hosts, then us-east-1b... etc.
The above simply errors out with:
ERROR! 'with_items' is not a valid attribute for a Play
Is there a way to accomplish what I am trying to do, which is chunking my host list into groups, and running the same role against them, one at a time?
The following achieves the result I am looking for, but is clunky, and not dynamic in length.
- name: run on us-east-1a
  hosts: "us-east-1a"
  roles:
    - my-role

- name: run on us-east-1b
  hosts: "us-east-1b"
  roles:
    - my-role

- name: run on us-east-1c
  hosts: "us-east-1c"
  roles:
    - my-role
I think the only way to (1) share common code and (2) serialise play execution per group of hosts (with the targets inside a group running in parallel) would be to split your playbook into two:
playbook-main.yml
---
- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1a

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1b

- import_playbook: playbook-sub.yml
  vars:
    host_group_to_run: us-east-1c
playbook-sub.yml
- hosts: "{{ host_group_to_run }}"
  roles:
    - my-role
  # other common code
If you wanted to serialise per host, there is a serial declaration (sketched below) that could be used in conjunction with this suggestion, but despite your comments and edit it's unclear, because at times you refer to us-east-1a as a "host" in singular form, and other times as a "group of hosts" or an "availability zone".
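For completeness, a minimal sketch of that serial variant, reusing playbook-sub.yml from above:
# runs the play to completion on one host before starting the next
- hosts: "{{ host_group_to_run }}"
  serial: 1
  roles:
    - my-role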
Will host patterns do the job?
- name: run on us-east-1a
  hosts: us-east-1a,us-east-1b,us-east-1c
  roles:
    - my-role
Update: techraf has opened my eyes with his comment: a host pattern alone will not do the job. It will just concatenate all hosts from all groups, but in a predictable way, which in some cases can be used to iterate the hosts in every group separately. See this answer for details.
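A hedged sketch of that iteration idea, assuming the pattern preserves group order and that each zone group has a known size (4 here, made up for illustration):
# serial accepts a list of batch sizes; if each batch size matches the
# corresponding group's size, every batch is exactly one zone group
- name: run one zone group per batch
  hosts: us-east-1a,us-east-1b,us-east-1c
  serial:
    - 4 # assumed size of us-east-1a
    - 4 # assumed size of us-east-1b
    - 4 # assumed size of us-east-1c
  roles:
    - my-role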
I'm trying to look for a text pattern in a load balancer host from a worker host, using the following:
- name: A play
  hosts: workers
  tasks:
    - name: Look for text pattern in delegated host
      delegate_to: load-balancer-host
      find:
        paths: "$ENVIRONMENT_VARIABLE/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable
The problem is that I can't find any way to make $ENVIRONMENT_VARIABLE (this variable exists on load-balancer-host, and contains the directory there under which I want to search) available to the play. ansible_env is only available for the workers, not for load-balancer-host.
I have tried...
- name: A play
  hosts: workers
  tasks:
    - name: set fact
      set_fact:
        env_var: "{{ lookup('env', 'ENVIRONMENT_VARIABLE') }}"
      delegate_to: load-balancer-host

    - name: debug
      debug:
        msg: "{{ env_var }}"
... too, but it prints an empty string.
For users running Ansible 1.x, see kfreezy's answer.
For users running Ansible 2.x, I have found the following solution:
- hosts: workers
  tasks:
    - name: gather facts from lb
      setup:
      delegate_to: load-balancer-host
      delegate_facts: false
This task will make $ENVIRONMENT_VARIABLE available in every worker's ansible_env var. If you want to make $ENVIRONMENT_VARIABLE available in load-balancer-host's own ansible_env instead, just set delegate_facts to true.
More info in the Ansible docs.
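For illustration, a hedged sketch of that delegate_facts: true variant; the inventory names are the same placeholders used above:
- hosts: workers
  tasks:
    # facts gathered here land on load-balancer-host's own hostvars
    - name: gather facts from lb, stored on the lb itself
      setup:
      delegate_to: load-balancer-host
      delegate_facts: true
      run_once: true # one gather is enough, since the facts all land in one place

    - debug:
        msg: "{{ hostvars['load-balancer-host'].ansible_env.ENVIRONMENT_VARIABLE }}"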
Personally, I would simplify your playbook by either adding $ENVIRONMENT_VARIABLE as a variable in Ansible (probably in the host_vars for load-balancer-host) or running a play against load-balancer-host rather than using delegate_to. It might not make sense depending on what the other tasks are.
Here's a direct answer to your question though.
load-balancer-host's ansible_env will only be defined when the host is included in the playbook. You can add another play against load-balancer-host that just gathers facts. Then you can reference its facts using hostvars in your subsequent plays against workers. Here's what it would look like:
- hosts: load-balancer-host
  tasks:
    - name: print debug message
      debug:
        msg: "this play is for gathering facts on the LB"

- name: A play
  hosts: workers
  tasks:
    - name: Look for text pattern in delegated host
      delegate_to: load-balancer-host
      find:
        paths: "{{ hostvars['load-balancer-host'].ansible_env.ENVIRONMENT_VARIABLE }}/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable
- name: "API"
  hosts: api
  vars:
    platform: "{{ application.api }}"
  vars_files:
    - vars/application-vars.yml
  tasks:
    - include: tasks/application-install.yml

- name: "JOBS"
  hosts: jobs
  vars:
    platform: "{{ application.jobs }}"
  vars_files:
    - vars/application-vars.yml
  tasks:
    - include: tasks/application-install.yml
With a playbook like the one described above, can I execute these different tasks on different hosts at the same time, in parallel?
Not sure what you actually want, but I'd combine it into a single play:
- hosts: api:jobs
  tasks:
    - include: tasks/application-install.yml
And add group vars to inventory:
[api:vars]
platform="{{ application.api }}"

[jobs:vars]
platform="{{ application.jobs }}"
This way you can run your playbook on all hosts at once, and you can also use the --limit option to choose only the api or jobs group, as shown below.
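For example (the playbook and inventory file names here are placeholders):
ansible-playbook -i inventory site.yml --limit api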
You can run multiple playbooks using the "ansible-playbook [OPTIONS] *.yml" command. This will execute all the playbooks, not in parallel but serially: first one playbook, then, after it finishes, the next. This can be helpful if you have many playbooks.
I don't know of a way to execute multiple playbooks in parallel.
I know that one can add a host with the following task:
- name: Add new instance to host group
  add_host:
    hostname: '{{ item.public_ip }}'
    groupname: "tag_Name_api_production"
  with_items: ec2.instances
But I can't seem to find a way to remove a host from inventory. Is there any way to do this?
Unfortunately, it seems that you can't do this using Ansible 2. There is no remove_host module or anything equivalent.
However, using Ansible 2 you can refresh your inventory mid-play:
- meta: refresh_inventory
Have a look at this question
Another idea might be to filter hosts beforehand: add them to a group, and then exclude this group in a later play (note that a bare exclusion matches nothing, so it must be combined with all), e.g.:
- hosts: 'all:!databases'
You can just stop the play for those hosts:
- name: Remove unwanted hosts from play_hosts
  meta: end_host
  when: unwanted
This assumes, of course, that the unwanted variable exists on all the hosts and is set properly.
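As a hedged illustration, the condition could instead be derived from group membership; the "decommissioned" group name here is made up:
- hosts: all
  gather_facts: false
  tasks:
    # ends the play for any host in the hypothetical "decommissioned" group
    - name: Remove unwanted hosts from play_hosts
      meta: end_host
      when: "'decommissioned' in group_names"

    - debug:
        msg: only hosts outside the decommissioned group reach this task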
You can't remove hosts, but you can choose to run only on a newly created group.
- name: kubectl
  hosts: localhost
  gather_facts: false
  tasks:
    - add_host:
        name: nextcloud
        ansible_connection: kubectl
        ansible_kubectl_context: cluster
        ansible_kubectl_namespace: default
        ansible_kubectl_pod: nextcloud-75fc7f5c6f-hxrq6
        groupname: "pods"

# only run on pods
- hosts: pods
  gather_facts: false
  tasks:
    - raw: pwd
      register: raw_result

    - debug:
        msg: "{{ raw_result.stdout_lines[0] }}"
We typically do it like this in a playbook using multiple hosts: sections.
- hosts: auth:!ocp
  roles:
    - ntp-server

- hosts: all:!auth:!ocp
  roles:
    - ntp-client
This removes groups of hosts from consideration via the !group mechanism. Specifically, in the first block we're excluding the ocp group, and in the second we're excluding both the auth and ocp groups.
References
https://docs.ansible.com/ansible/latest/user_guide/intro_patterns.html
I just hit the same problem. I have tests which run converge for a given playbook (which I can't modify), and then I need to run the same playbook with a smaller set of hosts in the group.
My solution is, given a group target you want to make smaller:
- Put the smallest set of hosts in the target group.
- Put the additional hosts into a target_addon group.
- hosts: target_addon
  tasks:
    - add_host:
        name: '{{ inventory_hostname }}'
        groups: [target]

- import_playbook: converge.yaml # uses group `target` with added hosts

# remove the hosts added from target_addon from group target by reloading the inventory
- hosts: localhost
  tasks:
    - meta: refresh_inventory

- import_playbook: converge.yaml # uses group `target` without added hosts