Ansible - task serial 1 reverse order

I'd like to create two playbooks, one to stop an environment, another to start it.
Part of the environment is a RabbitMQ cluster, for which stop/start order is quite important: specifically, the last node stopped needs to be the first node started.
I was wondering if there is a way to specify a reverse order for running a task against a group.
That way I could apply the stop with serial 1, and the start with serial 1 and reverse group order.
The only way I have found is to define the rabbitmq host group twice (under different names), in inverted order, which seems a bit distasteful.
I also attempted the following:
- hosts: "{{ myhostsgroup | sort(reverse=False) }}"
  serial: 1
and:
- hosts: "{{ myhostsgroup | reverse }}"
  serial: 1
But the result stays the same, whichever variation (reverse=True, reverse | list) is attempted.
Any help would be greatly appreciated.

You can create dynamic groups at runtime:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - add_host:
        name: "{{ item }}"
        group: forward
      with_items: "{{ groups['mygroup'] }}"
    - add_host:
        name: "{{ item }}"
        group: backward
      with_items: "{{ groups['mygroup'] | reverse | list }}"

- hosts: forward
  gather_facts: no
  serial: 1
  tasks:
    - debug:

- hosts: backward
  gather_facts: no
  serial: 1
  tasks:
    - debug:
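On Ansible 2.4 and newer there is a more direct route: the order play keyword controls the order in which hosts are walked, so no dynamic groups are needed. A minimal sketch applying it to the RabbitMQ case from the question (the rabbitmq group name and the rabbitmq-server service name are assumptions):

# stop.yml: stop nodes one at a time, in inventory order
- hosts: rabbitmq
  gather_facts: no
  serial: 1
  tasks:
    - service:
        name: rabbitmq-server
        state: stopped

# start.yml: walk the same group in reverse, so the last node
# stopped becomes the first node started
- hosts: rabbitmq
  gather_facts: no
  serial: 1
  order: reverse_inventory
  tasks:
    - service:
        name: rabbitmq-server
        state: started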

Related

Getting ansible_play_hosts from previous play?

I have an ansible playbook that interacts with the management card in a bunch of servers, and then produces a report based on that information. Structurally it looks like:
- hosts: all
  tasks:
    - name: do something with redfish
      uri:
        ...
      register: something

- hosts: localhost
  tasks:
    - name: produce report
      template:
        ...
      loop: "{{ SOME_LIST_OF_HOSTS }}"
Originally, the template task in the second play was looping over groups.all, but that causes a number of complications if we limit the target hosts using -l on the command line (like ansible-playbook -l only_cluster_a ...). In that case, I would like the template task to loop over only the hosts targeted by the first play. In other words, I want to know ansible_play_hosts_all from the previous play.
This is what I've come up with:
- hosts: all
  gather_facts: false
  tasks:
    - delegate_to: localhost
      delegate_facts: true
      run_once: true
      set_fact:
        saved_play_hosts: "{{ ansible_play_hosts_all }}"
    # ...other tasks go here...

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg:
          play_hosts: "{{ saved_play_hosts }}"
Is that the best way of doing this?
You could use the add_host module: at the end of the first play you add a task:
- name: add variables to dummy host
  add_host:
    name: "variable_holder"
    shared_variable: "{{ saved_play_hosts }}"
and you can pick up the value in the second play:
- name: second play
  hosts: localhost
  vars:
    play_hosts: "{{ hostvars['variable_holder']['shared_variable'] }}"
  tasks:
    # ...
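Putting the pieces together, a minimal end-to-end sketch (the variable_holder name is arbitrary):

- hosts: all
  gather_facts: false
  tasks:
    # add_host bypasses the host loop, so it effectively runs once
    # and records the play's host list on the dummy host
    - name: add variables to dummy host
      add_host:
        name: "variable_holder"
        shared_variable: "{{ ansible_play_hosts_all }}"

- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ hostvars['variable_holder']['shared_variable'] }}"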

Run ansible task only once per each unique fact value

I have a dynamic inventory that assigns a "fact" to each host, called a 'cluster_number'.
The cluster numbers are not known in advance, but one or more hosts are assigned each number. The inventory has hundreds of hosts and 2-3 dozen unique cluster numbers.
I want to run a task for all hosts in the inventory; however, I want to execute it only once per group of hosts sharing the same 'cluster_number' value. It does not matter which specific host is selected for each group.
I feel like there should be a relatively straightforward way to do this with Ansible, but I can't figure out how. I've looked at group_by, when, loop, delegate_to, etc., but with no success yet.
An option would be to group_by the cluster_number, run_once a loop over the cluster numbers, and pick the first host from each group.
For example, given the hosts
[test]
test01 cluster_number='1'
test02 cluster_number='1'
test03 cluster_number='1'
test04 cluster_number='1'
test05 cluster_number='1'
test06 cluster_number='2'
test07 cluster_number='2'
test08 cluster_number='2'
test09 cluster_number='3'
test10 cluster_number='3'
[test:vars]
cluster_numbers=['1','2','3']
the following playbook
- hosts: all
  gather_facts: no
  tasks:
    - group_by: key=cluster_{{ cluster_number }}
    - debug: var=groups['cluster_{{ item }}'][0]
      loop: "{{ cluster_numbers }}"
      run_once: true
gives
> ansible-playbook test.yml | grep groups
"groups['cluster_1'][0]": "test01",
"groups['cluster_2'][0]": "test06",
"groups['cluster_3'][0]": "test09",
To execute tasks on the targets, use include_tasks (instead of debug in the loop above) and delegate_to the target (a fuller sketch follows below):
- set_fact:
    my_group: "cluster_{{ item }}"
- command: hostname
  delegate_to: "{{ groups[my_group][0] }}"
Note: collect the list cluster_numbers from the inventory:
cluster_numbers: "{{ hostvars | json_query('*.cluster_number') | unique }}"
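Putting it together, the loop can hand the per-cluster work off to a task file; per_cluster.yml is a hypothetical name here:

- hosts: all
  gather_facts: no
  tasks:
    - group_by: key=cluster_{{ cluster_number }}
    # runs once per unique cluster number, not once per host
    - include_tasks: per_cluster.yml
      loop: "{{ cluster_numbers }}"
      run_once: true

where per_cluster.yml contains the set_fact and delegated command shown above.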
If you don't mind cluttering the play logs, here's a way:
- hosts: all
  gather_facts: no
  serial: 1
  tasks:
    - group_by:
        key: "single_{{ cluster_number }}"
      when: groups['single_' + cluster_number] | default([]) | count == 0

- hosts: single_*
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ inventory_hostname }}"
serial: 1 is crucial in the first play: it forces the when statement to be re-evaluated for every host.
After the first play you'll have one group per cluster number, each containing a single host.

Issue passing variable to Ansible playbook for hosts: with double quotes

I'm trying to write a playbook that kicks off role playbooks and passes a list of hosts to them. The "master" playbook has some load-balancing logic in it that I don't want to repeat in every role playbook and can't put into site.yml.
inventory.yml
[webservers]
Web1
Web2
Web3
Web4
master.yml
---
- name: Split Inventory into Odd/Even
  hosts: all
  gather_facts: false
  tasks:
    - name: Set Group Odd
      set_fact:
        group_type: "odd"
      when: (inventory_hostname.split(".")[0])[-1] | int is odd
    - name: Set Group Even
      set_fact:
        group_type: "even"
      when: (inventory_hostname.split(".")[0])[-1] | int is even
    - name: Make new groups "odd" or "even"
      group_by:
        key: "{{ group_type }}"
    - name: Perform Roles on Odd
      include: webservers.yml hosts={{ groups['odd'] | join(' ') }}
    - name: Perform Roles on Even
      include: webservers.yml hosts={{ groups['even'] | join(' ') }}
webservers.yml
- name: Perform Tasks on Webservers
  hosts: webservers:&"{{ hosts | replace('\"','') }}"
  roles:
    - pause
The join(' ') flattens the list of hosts into a string with a space separating each one. When I run the playbook it passes the list of hosts to webservers.yml; however, it adds double quotes to the beginning and end, so webservers.yml does nothing since no hosts match. I assumed the replace('\"','') would remove the quotes around the string, but that doesn't seem to be the case. Here's an example output from webservers.yml:
[WARNING]: Could not match supplied host pattern, ignoring: Web4"
[WARNING]: Could not match supplied host pattern, ignoring: "Web2
Any ideas? Does hosts: handle filtering differently?
I feel that you are using roles and plays in the wrong way. A task should not change the list of hosts that it, or its role, is executed upon. Basically, only a play (the thing with hosts: ..., tasks: ..., roles: ...) can control where to run.
There are a few exceptions, e.g. you can play with delegation and so on. But for your case, any attempt to use tasks or roles to control the list of hosts will only bring misery and hate (toward yourself, toward Ansible, etc.).
To do it right, just add another play to your playbook (a playbook is a list of plays).
Here is your code, slightly modified.
---
- name: Split Inventory into Odd/Even
  hosts: all
  gather_facts: false
  tasks:
    - name: Set Group Odd
      set_fact:
        group_type: "odd"
      when: (inventory_hostname.split(".")[0])[-1] | int is odd
    - name: Set Group Even
      set_fact:
        group_type: "even"
      when: (inventory_hostname.split(".")[0])[-1] | int is even
    - name: Make new groups "odd" or "even"
      group_by:
        key: "{{ group_type }}"

- name: Doing odd things
  hosts: odd
  gather_facts: false
  tasks:
    - name: Perform Roles
      include: webservers.yml

- name: Doing even things
  hosts: even
  gather_facts: false
  tasks:
    - name: Perform Roles
      include: webservers.yml
As you can see, I've just targeted two plays at the dynamic groups ('odd' and 'even'). Dynamic groups are preserved between plays in a playbook, and in this respect they are no different from any other group.
P.S. Do not use 'include'; use 'import_tasks' (includes are dangerous in newer versions of Ansible, so avoid them if you can).
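For import_tasks (or the task-level include above) to work, webservers.yml itself must be a plain task list rather than a play. A minimal sketch, reusing the pause role from the question:

# webservers.yml as a task file: no hosts: line, just tasks
- name: Perform Tasks on Webservers
  import_role:
    name: pause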

Ansible playbook that generates and shares variable between hosts

My Ansible playbook deploys to both database and web servers, and I need to share some variables between them. The answer from this question almost gives me what I need:
---
- hosts: all
  tasks:
    - set_fact: my_global_var='hello'

- hosts: db
  tasks:
    - debug: msg={{ my_global_var }}

- hosts: web
  tasks:
    - debug: msg={{ my_global_var }}
However, in my case the variable is a password that is generated randomly by the playbook on each run and then has to be shared:
---
- hosts: all
  tasks:
    - name: Generate new password
      shell: "tr -dc _[:alnum:] < /dev/urandom | head -c${1:-20}"
      register: new_password
    - name: Set password as fact
      set_fact:
        my_global_var: "{{ new_password.stdout }}"

- hosts: db
  tasks:
    - debug: msg={{ my_global_var }}

- hosts: web
  tasks:
    - debug: msg={{ my_global_var }}
The above example doesn't work, as the password is re-generated and completely different on each host in all (unless you coincidentally use the same machine/hostname for your db and web servers).
Ideally I don't want someone to have to remember to pass a good random password in on the command-line using --extra-vars, it should be generated and handled by the playbook.
Is there any suggested mechanism in Ansible for creating variables within a playbook and having it accessible to all hosts within that playbook?
You may want to generate the password on localhost and then copy it to every other host:
---
- hosts: localhost
  tasks:
    - name: Generate new password
      shell: "tr -dc _[:alnum:] < /dev/urandom | head -c${1:-20}"
      register: new_password

- hosts: all
  tasks:
    - name: Set password as fact
      set_fact:
        my_global_var: "{{ hostvars['localhost'].new_password.stdout }}"

- hosts: db
  tasks:
    - debug: msg={{ my_global_var }}

- hosts: web
  tasks:
    - debug: msg={{ my_global_var }}
I know this is an old question but I settled on an alternative method that combines the two answers provided here and this issue, by using an implicit localhost reference and doing everything within the same play. I think it's a bit more elegant. Tested with 2.8.4.
This is the working solution in my context, where I wanted a common timestamped backup directory across all my hosts, for later restore:
---
  tasks:
    - name: Set local fact for the universal backup string
      set_fact:
        thisHostTimestamp: "{{ ansible_date_time.iso8601 }}"
      delegate_to: localhost
      delegate_facts: true

    - name: Distribute backup datestring to all hosts in group
      set_fact:
        backupsTimeString: "{{ hostvars['localhost']['thisHostTimestamp'] }}"
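Every host then sees the same value, so it can be used wherever a consistent name is needed across the group; for example (the /backups path is only an illustration):

- name: Create the same backup directory on every host
  file:
    path: "/backups/{{ backupsTimeString }}"
    state: directory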
I believe this should translate to the OP's original example like this, but I have not tested it:
---
- hosts: all
  tasks:
    - name: Generate new password
      shell: "tr -dc _[:alnum:] < /dev/urandom | head -c${1:-20}"
      register: new_password
      delegate_to: localhost
      # run_once ensures the command runs a single time, and the
      # registered result is then shared with every host in the play
      run_once: true

    - name: Set password as fact
      set_fact:
        my_global_var: "{{ new_password.stdout }}"
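Another option worth noting: the built-in password lookup generates a random password once on the control node and caches it in a local file, which sidesteps fact sharing entirely (the credentials/db_pass.txt cache path is an assumption):

- hosts: db:web
  tasks:
    # lookups run on the control node; the cache file guarantees
    # every host receives the identical generated password
    - set_fact:
        my_global_var: "{{ lookup('password', 'credentials/db_pass.txt length=20 chars=ascii_letters,digits') }}"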

If set_fact is scoped to a host, can I use 'dummy' host as a global variable map?

I have defined two groups of hosts: wmaster and wnodes. Each group runs in its own play:
- hosts: wmaster
  roles:
    - all
    - swarm-mode
  vars:
    - swarm_master: true

- hosts: wnodes
  roles:
    - all
    - swarm-mode
I use a host variable (swarm_master) to define different behavior in some roles.
Now, my first playbook performs some initialization and I need to share data with the nodes. What I did was use set_fact in the first play, and then look it up in the second play:
- set_fact:
    docker_worker_token: "{{ hostvars[swarm_master_ip].foo }}"
I don't like using the swarm_master_ip. How about adding a dummy host global, with e.g. address 1.1.1.1, that does not get any role and serves just to hold the global facts/variables?
If you're using Ansible 2 then you can make use of delegate_facts during your first play:
- name: set fact on swarm nodes
  set_fact: docker_worker_token="{{ some_var }}"
  delegate_to: "{{ item }}"
  delegate_facts: True
  with_items: "{{ groups['wnodes'] }}"
This should delegate the set_fact task to every host in the wnodes group and will also delegate the resulting fact to those hosts as well instead of setting the fact on the inventory host currently being targeted by the first play.
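The nodes can then read the delegated fact directly in their own play, e.g.:

- hosts: wnodes
  tasks:
    - debug:
        var: docker_worker_token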
How about adding a dummy host global
I have actually found this suggestion to be quite useful in certain circumstances.
---
- hosts: my_server
  tasks:
    # create server_fact somehow
    - add_host:
        name: global
        my_server_fact: "{{ server_fact }}"

- hosts: host_group
  tasks:
    - debug: var=hostvars['global']['my_server_fact']
