Ansible: run a certain YAML tasks file for all hosts

I'm trying to run a certain YAML tasks file for all hosts, as follows (main.yml):
- name: prepare nodes
  include_tasks: node.yml node="{{ item }}"
  loop: "{{ groups['all'] }}"
node.yml:
- block:
    - name: Task 1...
      ...
    - name: Task 100...
  delegate_to: "{{ node }}"
However I get this error: Invalid options for include_tasks: node. I think it used to work in this manner. Anyway, I tried to move the loop from main.yml into node.yml (right after delegate_to), and I also tried to skip the node="{{ item }}" part, but I always get errors.
What is the proper way to apply a task file to several hosts within a role?

It should work if you put your node variable under vars and then loop.
- name: include tasks
  include_tasks: node.yml
  vars:
    node: '{{ item }}'
  loop: "{{ groups['all'] }}"
The above code works.
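With that in place, node.yml from the question can keep its block-level delegation to the host passed in as node. A minimal sketch (the debug task is only a placeholder standing in for the real tasks):
- block:
    - name: Task 1...
      debug:
        msg: "running against {{ node }}"
  delegate_to: "{{ node }}"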

A play runs on the hosts you specified. You can run certain tasks on a subset of nodes using when.
But you can have multiple plays in a playbook. So you need to specify a play with hosts: all where you run the tasks you want to run everywhere and another one which runs the rest of the tasks.
Your playbook could look like this:
---
# This is a play
- name: run on all
  hosts: all
  vars:
    somevar: 'test'
  tasks:
    - name: prepare nodes
      include_tasks: node.yml

# This is another play
- name: run on group
  hosts: hostgroup
  vars:
    somevar: 'example'
  tasks:
    - debug:
        msg: 'This runs on all hosts in hostgroup'
# Both plays are in the same playbook
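With this layout, node.yml can be a plain list of tasks with no loop or delegation, since the first play already targets every host. A minimal sketch (the debug task is only a placeholder):
- name: Task 1...
  debug:
    msg: "preparing {{ inventory_hostname }}"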

Related

Getting ansible_play_hosts from previous play?

I have an ansible playbook that interacts with the management card in a bunch of servers, and then produces a report based on that information. Structurally it looks like:
- hosts: all
  tasks:
    - name: do something with redfish
      uri:
        ...
      register: something

- hosts: localhost
  tasks:
    - name: produce report
      template:
        ...
      loop: "{{ SOME_LIST_OF_HOSTS }}"
Originally, the template task in the second play looped over groups.all, but that causes a number of complications if we limit the target hosts with -l on the command line (like ansible-playbook -l only_cluster_a ...). In that case, I would like the template task to loop over only the hosts targeted by the first play. In other words, I want to know ansible_play_hosts_all from the previous play.
This is what I've come up with:
- hosts: all
  gather_facts: false
  tasks:
    - delegate_to: localhost
      delegate_facts: true
      run_once: true
      set_fact:
        saved_play_hosts: "{{ ansible_play_hosts_all }}"
    ...other tasks go here...

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg:
          play_hosts: "{{ saved_play_hosts }}"
Is that the best way of doing this?
You could use the add_host module: at the end of the first play you add a task:
- name: add variables to dummy host
  add_host:
    name: "variable_holder"
    shared_variable: "{{ saved_play_hosts }}"
and then you can pick up the value in the second play:
- name: second play
  hosts: localhost
  vars:
    play_hosts: "{{ hostvars['variable_holder']['shared_variable'] }}"
  tasks:
    :
    :
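For completeness, a minimal sketch of a task in that second play consuming the trapped value (the debug message is only a placeholder for the real report/template task):
- name: produce report
  debug:
    msg: "Report covers: {{ play_hosts | join(', ') }}"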

Run tasks on different hosts within an imported task

The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file:
I get: ERROR! conflicting action statements: hosts, tasks .
Is this because I'm trying to run against hosts that were never included in the calling task?
Is there a way to accomplish this other than having this in the calling playbook:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than in the calling playbook
I believe the answer is yes, although it'll be weird and could cause subsequent folks who interact with your playbook some confusion.
given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: touch
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
then the glue needed to bridge them together is the dynamic creation of a group, plus that delegate_to: keyword:
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1  # or whatever
        # ... other hostvars here ...
    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml, via some shared vars that say which host and IP should be added to the magic group, which also has the benefit of keeping the magic group name localized to the file that consumes it (a rough sketch follows the note below).
BE AWARE: I did actually test this, but not extensively, so there may be some weird things, such as needing run_once: yes on the tasks to keep them from being run groups.ssh_hosts|length times, or similar.
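A minimal sketch of that combined variant, where the hypothetical vars new_host_name and new_host_ip would be supplied by the caller on the include_tasks line, and run_once is added per the caveat above:
# create_files.yml (combined variant; variable names are illustrative)
- add_host:
    groups: not_known_at_launch_time
    name: "{{ new_host_name }}"
    ansible_host: "{{ new_host_ip }}"

- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: touch
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
  run_once: yes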
As Vladimir correctly pointed out, what you actually want to happen is to make that relationship formal:
- hosts: ssh_hosts
  tasks:
    ... whatever tasks you had before
    - add_host: ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO

Issue while including another playbook in Ansible?

I have written a playbook named master.yaml, defined below:
- hosts: master
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: Get env
      command: id -g -n {{ lookup('env', '$USER') }}
      register: group_user
      vars:
        is_done: "false"
    - include: slave.yaml
      vars:
        sethostname: "{{ group_user }}"
        worker: worker
      when: is_done == "true"
      where: inventory_hostname in groups['worker']
I am trying to run another playbook, named slave.yaml and defined below, after a certain condition is met.
- hosts: worker
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: Write to a file for daemon setup
      copy:
        content: "{{ sethostname }}"
        dest: "/home/ubuntu/test.text"
Now I have two questions:
1. I am not able to set the value of the var is_done. slave.yaml should only run when is_done is true.
2. How does slave.yaml access the value of worker? I have defined the group worker in inventory.yaml.
I do not know if it's the right way to go to reach your objective, but I tried to make this playbook work while keeping as much of your logic as possible. Hope it helps.
The point is that you cannot use import_playbook inside a play. Check the module documentation for more information.
So I propose sharing the code with a role instead of a playbook. You will then be able to share the slave role between the master playbook and other playbooks, a slave playbook for example.
The Ansible folder structure is the following:
├── hosts
├── master.yml
└── roles
└── slave
└── tasks
└── main.yml
master.yml
---
- name: 'Master Playbook'
  # Using the serial keyword to run the playbook for each host one by one
  hosts: master
  serial: 1
  remote_user: "{{ ansible_user }}"
  tasks:
    - name: 'Get env'
      command: id -g -n {{ lookup('env', '$USER') }}
      register: group_user
    - name: 'Calling the slave role'
      import_role:
        name: 'slave'
      # The return value of the command is stored in stdout
      vars:
        sethostname: "{{ group_user.stdout }}"
      # Only run when the task get env has been done (state changed)
      when: group_user.changed
      # Delegate the call to the worker host(s) -> don't know if it's the expected behavior
      delegate_to: 'worker'
Slave main.yml
---
- name: 'Write to a file for daemon setup'
  copy:
    content: "{{ sethostname }}"
    dest: "/tmp/test.text"
In the end, /tmp/test.text contains the effective user group name.

Run ansible task only once per each unique fact value

I have a dynamic inventory that assigns a "fact" to each host, called a 'cluster_number'.
The cluster numbers are not known in advance, but there is one or more hosts that are assigned the same number. The inventory has hundreds of hosts and 2-3 dozen unique cluster numbers.
I want to run a task for all hosts in the inventory, however I want to execute it only once per each group of hosts sharing the same 'cluster_number' value. It does not matter which specific host is selected for each group.
I feel like there should be a relatively straightforward way to do this with Ansible, but I can't figure out how. I've looked at group_by, when, loop, delegate_to, etc., but no success yet.
An option would be to group_by the cluster_number, then run_once a loop over the cluster numbers and pick the first host from each group.
For example given the hosts
[test]
test01 cluster_number='1'
test02 cluster_number='1'
test03 cluster_number='1'
test04 cluster_number='1'
test05 cluster_number='1'
test06 cluster_number='2'
test07 cluster_number='2'
test08 cluster_number='2'
test09 cluster_number='3'
test10 cluster_number='3'
[test:vars]
cluster_numbers=['1','2','3']
the following playbook
- hosts: all
  gather_facts: no
  tasks:
    - group_by: key=cluster_{{ cluster_number }}
    - debug: var=groups['cluster_{{ item }}'][0]
      loop: "{{ cluster_numbers }}"
      run_once: true
gives
> ansible-playbook test.yml | grep groups
"groups['cluster_1'][0]": "test01",
"groups['cluster_2'][0]": "test06",
"groups['cluster_3'][0]": "test09",
To execute tasks at the targets, use include_tasks (instead of debug) in the loop above and delegate_to the target:
- set_fact:
    my_group: "cluster_{{ item }}"
- command: hostname
  delegate_to: "{{ groups[my_group][0] }}"
Note: Collect the list cluster_numbers from the inventory
cluster_numbers: "{{ hostvars|json_query('*.cluster_number')|unique }}"
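Wired together, a minimal sketch (cluster_tasks.yml is a hypothetical file holding the set_fact/command snippet above):
- hosts: all
  gather_facts: no
  vars:
    cluster_numbers: "{{ hostvars | json_query('*.cluster_number') | unique }}"
  tasks:
    - group_by: key=cluster_{{ cluster_number }}
    - include_tasks: cluster_tasks.yml
      loop: "{{ cluster_numbers }}"
      run_once: true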
If you don't mind cluttering the play logs, here's a way:
- hosts: all
  gather_facts: no
  serial: 1
  tasks:
    - group_by:
        key: "single_{{ cluster_number }}"
      when: groups['single_' + cluster_number] | default([]) | count == 0

- hosts: single_*
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
serial: 1 is crucial in the first play so that the when statement is re-evaluated for every host.
After the first play you'll have N groups, one per cluster, each containing a single host.

Ansible - use delegate_to and access environment variable from delegated host

I'm trying to look for a text pattern in a load balancer host from a worker host, using the following:
- name: A play
  hosts: workers
  tasks:
    - name: Look for text pattern in delegated host
      delegate_to: load-balancer-host
      find:
        paths: "$ENVIRONMENT_VARIABLE/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable
The problem is that I can't find any way to make $ENVIRONMENT_VARIABLE (this variable exists on load-balancer-host) available to the play; it contains the directory on load-balancer-host under which I want to search. ansible_env is only available for the workers, not for load-balancer-host.
I have tried...
- name: A play
  hosts: workers
  tasks:
    - name: set fact
      set_fact:
        env_var: "{{ lookup('env', 'ENVIRONMENT_VARIABLE') }}"
      delegate_to: load-balancer-host
    - name: debug
      debug:
        msg: "{{ env_var }}"
... too, but it prints an empty string.
For users running Ansible 1.x, see kfreezy's answer.
For users running Ansible 2.x, I have found the following solution:
- hosts: workers
  tasks:
    - name: gather facts from lb
      setup:
      delegate_to: load-balancer-host
      delegate_facts: false
This task will make $ENVIRONMENT_VARIABLE available in every worker's ansible_env. If you want to make it available in load-balancer-host's ansible_env instead, just set delegate_facts to true.
More info in the Ansible docs.
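The original find task can then read the value from the worker's ansible_env. A sketch, assuming the setup task above has already run with delegate_facts: false:
- name: Look for text pattern in delegated host
  delegate_to: load-balancer-host
  find:
    paths: "{{ ansible_env.ENVIRONMENT_VARIABLE }}/subdir"
    file_type: file
    patterns: file.pattern
    contains: 'text pattern'
  register: aVariable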
Personally I would simplify your playbook by either adding $ENVIRONMENT_VARIABLE as a variable in Ansible (probably in the host_vars for load-balancer-host) or running a play against load-balancer-host rather than using delegate_to. It might not make sense depending on what the other tasks are.
Here's a direct answer to your question, though.
load-balancer-host's ansible_env will only be defined when that host is included in the playbook. You can add another play against load-balancer-host that just gathers facts. Then you can reference the facts from load-balancer-host using hostvars in your subsequent plays against workers. Here's what it would look like:
- hosts: load-balancer-host
  tasks:
    - name: print debug message
      debug:
        msg: "this play is for gathering facts on the LB"

- name: A play
  hosts: workers
  tasks:
    - name: Look for text pattern in delegated host
      delegate_to: load-balancer-host
      find:
        paths: "{{ hostvars['load-balancer-host'].ansible_env.ENVIRONMENT_VARIABLE }}/subdir"
        file_type: file
        patterns: file.pattern
        contains: 'text pattern'
      register: aVariable
