Ansible playbook target patterns

I would like to ask if it is possible to filter hosts in the inventory using patterns based on variables defined inside the inventory itself.
For example:
I have this inventory
groupA:
  hosts:
    host1A:
      env: pre
    host2A:
      env: prod
groupB:
  hosts:
    host1B:
      env: pre
    host2B:
      env: prod
and this is the playbook:
- name: test
  hosts: **only hosts that contain env: pre??**
  role: backend
Is it possible to achieve that, maybe using a regular expression?
Please help me out.
Thanks in advance.

Given the inventory
shell> cat hosts
groupA:
  hosts:
    host1A:
      env: pre
    host1B:
      env: prod
groupB:
  hosts:
    host1B:
      env: pre
    host1C:
      env: prod
The simplest option is using module group_by. For example,
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: group_{{ env }}
    - debug:
        var: groups
      run_once: true
gives
PLAY [all] ************************************************************************************
TASK [group_by] *******************************************************************************
changed: [host1A]
changed: [host1B]
changed: [host1C]
TASK [debug] **********************************************************************************
ok: [host1A] =>
  groups:
    all:
    - host1A
    - host1B
    - host1C
    groupA:
    - host1A
    - host1B
    groupB:
    - host1B
    - host1C
    group_pre:
    - host1A
    - host1B
    group_prod:
    - host1C
    ungrouped: []
PLAY RECAP ************************************************************************************
host1A: ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1B: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1C: ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
This is probably not what you want: the host host1B is not a member of group_prod because its variable env evaluates to 'pre'.
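To verify this, a quick sketch (my addition, not part of the original answer) that only prints what env resolves to for each host:
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: env
For host1B this prints env: pre.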
You can easily fix this by putting all applicable values of env into a comma-separated list. For example,
shell> cat hosts
groupA:
  hosts:
    host1A:
      env: pre
    host1B:
      env: prod,pre
groupB:
  hosts:
    host1B:
    host1C:
      env: prod
and the playbook
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: "group_{{ item }}"
      loop: "{{ env.split(',') }}"
    - debug:
        var: groups
      run_once: true
gives what you want
PLAY [all] ************************************************************************************
TASK [group_by] *******************************************************************************
ok: [host1A] => (item=pre)
ok: [host1B] => (item=prod)
ok: [host1C] => (item=prod)
ok: [host1B] => (item=pre)
TASK [debug] **********************************************************************************
ok: [host1A] =>
  groups:
    all:
    - host1A
    - host1B
    - host1C
    groupA:
    - host1A
    - host1B
    groupB:
    - host1B
    - host1C
    group_pre:
    - host1A
    - host1B
    group_prod:
    - host1C
    - host1B
    ungrouped: []
PLAY RECAP ************************************************************************************
host1A: ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1B: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host1C: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
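To put the dynamic group to use for the play from the question, a minimal sketch (assuming a role named backend exists) would first build the group and then target it:
- hosts: all
  gather_facts: false
  tasks:
    - group_by:
        key: "group_{{ item }}"
      loop: "{{ env.split(',') }}"

- name: test
  hosts: group_pre
  roles:
    - backend
Note that groups created by group_by exist only for the duration of the playbook run, so the group_by play has to precede the play that targets group_pre.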

Related

How to make a variable available across multiple ansible playbooks?

In my first playbook, I ask the user for a value and store it in a variable. I would like that variable to be accessible in other playbooks. There is only one host in the inventory, by the way.
My first playbook:
---
- name: Get the name of the city from the user
  hosts: all
  gather_facts: yes
  vars_prompt:
    - name: my_city
      prompt: "Enter the name of city: "
      private: no
  tasks:
    - name: Set fact for city
      set_fact:
        city: "{{ my_city }}"
        cacheable: yes
In another playbook, when I try to print the variable I set in the previous one, I get an error:
---
- name: Print a fact
  hosts: all
  gather_facts: yes
  tasks:
    - name: Print ansible_facts['city'] variable
      debug:
        msg: "Value of city variable is {{ ansible_facts['city'] }}"
Error:
fatal: [testing]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'city'\n\nThe error appears to be in '/home/user/ansible_tasks/print_fact.yml': line 6, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Print ansible_facts['city'] variable\n ^ here\n"}
Enable fact caching if you want to use it. For example,
shell> grep fact_caching ansible.cfg
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_cache
fact_caching_prefix = ansible_facts_
fact_caching_timeout = 86400
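For context, these settings live in the [defaults] section of ansible.cfg; a minimal sketch of the file would be:
shell> cat ansible.cfg
[defaults]
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_cache
fact_caching_prefix = ansible_facts_
fact_caching_timeout = 86400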
Then the playbook below
shell> cat pb1.yml
- hosts: localhost
  gather_facts: false
  tasks:
    - set_fact:
        city: my_city
        cacheable: true
will store the variable city in the cache
shell> cat /tmp/ansible_cache/ansible_facts_localhost
{
    "city": "my_city"
}
The next playbook
shell> cat pb2.yml
- hosts: localhost
  gather_facts: false
  tasks:
    - debug:
        var: city
will read the cache
shell> ansible-playbook pb2.yml
PLAY [localhost] *****************************************************************************
TASK [debug] *********************************************************************************
ok: [localhost] =>
  city: my_city
PLAY RECAP ***********************************************************************************
localhost: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
If you want to cache the same variable for multiple hosts, for example
shell> cat hosts
host_1
host_2
host_3
it is sufficient to run the module set_fact once. The playbook
shell> cat pb3.yml
- hosts: all
  gather_facts: false
  tasks:
    - set_fact:
        city: my_city
        cacheable: true
      run_once: true
will store the variable city in the cache of all hosts
shell> grep -r city /tmp/ansible_cache/
/tmp/ansible_cache/ansible_facts_host_3: "city": "my_city"
/tmp/ansible_cache/ansible_facts_host_1: "city": "my_city"
/tmp/ansible_cache/ansible_facts_host_2: "city": "my_city"
The next playbook
shell> cat pb4.yml
- hosts: all
  gather_facts: false
  tasks:
    - debug:
        var: city
will read the cache
shell> ansible-playbook pb4.yml
PLAY [all] ***********************************************************************************
TASK [debug] *********************************************************************************
ok: [host_1] =>
  city: my_city
ok: [host_2] =>
  city: my_city
ok: [host_3] =>
  city: my_city
PLAY RECAP ***********************************************************************************
host_1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host_2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host_3: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
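Applied to the original question, a sketch (assuming fact caching is enabled as shown above): keep the first playbook with vars_prompt and the cacheable set_fact as it is, and in the second playbook read the cached variable directly instead of ansible_facts['city']:
shell> cat print_fact.yml
- name: Print a fact
  hosts: all
  gather_facts: false
  tasks:
    - debug:
        msg: "Value of city variable is {{ city }}"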

Ansible how to select hosts based on certain attributes and use their ip addresses to create a list at runtime

I have a specific question about data manipulation in ansible.
In my inventory file, I have a group called postgresql as below:
[postgresql]
host1 ansible_host=1.1.1.1 postgresql_cluster_port=5432 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
host3 ansible_host=3.3.3.3 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
host4 ansible_host=4.4.4.4 postgresql_cluster_port=5432 postgresql_harole=slave postgresql_master_ip=1.1.1.1
Somewhere in my playbook I need to use filters to build a list of the IP addresses of all hosts where postgresql_harole=slave, something like this:
- hosts: postgresql
  gather_facts: True
  remote_user: root
  tasks:
    - set_fact:
        slave_ip_list: "{{ expressions }}"
I am pulling my hair out trying to get the correct expression... any help is highly appreciated!
Q: "List of ip addresses of all hosts whose postgresql_harole=slave"
A: Select the attribute from hostvars, e.g.
- set_fact:
    slave_ip_list: "{{ hostvars|dict2items|
                       selectattr('value.postgresql_harole', 'eq', 'slave')|
                       map(attribute='value.ansible_host')|
                       list }}"
  run_once: true
gives
slave_ip_list:
- 2.2.2.2
- 3.3.3.3
- 4.4.4.4
Select the hostvars for the group postgresql first if there are hostvars for other hosts, e.g. as a result of - hosts: all. The task below gives the same result
- set_fact:
    slave_ip_list: "{{ groups.postgresql|
                       map('extract', hostvars)|
                       selectattr('postgresql_harole', 'eq', 'slave')|
                       map(attribute='ansible_host')|
                       list }}"
  run_once: true
Update
You can simplify both the code and inventory. Put the declarations into the group_vars. For example,
shell> cat group_vars/postgresql
postgresql_cluster_port: 5432
postgresql_master_ip: "{{ groups.postgresql|
                          map('extract', hostvars)|
                          selectattr('postgresql_harole', 'eq', 'master')|
                          map(attribute='ansible_host')|first }}"
postgresql_slave_ip: "{{ groups.postgresql|
                         map('extract', hostvars)|
                         selectattr('postgresql_harole', 'eq', 'slave')|
                         map(attribute='ansible_host')|list }}"
Then, you can remove postgresql_master_ip and postgresql_cluster_port from the inventory
shell> cat hosts
[postgresql]
host1 ansible_host=1.1.1.1 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_harole=slave
host3 ansible_host=3.3.3.3 postgresql_harole=slave
host4 ansible_host=4.4.4.4 postgresql_harole=slave
The playbook
- hosts: postgresql
gather_facts: false
tasks:
- debug:
msg: |
ansible_host: {{ ansible_host }}
postgresql_harole: {{ postgresql_harole }}
postgresql_master_ip: {{ postgresql_master_ip }}
postgresql_cluster_port: {{ postgresql_cluster_port }}
postgresql_slave_ip: {{ postgresql_slave_ip }}
gives
shell> ansible-playbook pb.yml
PLAY [postgresql] ****************************************************************************
TASK [debug] *********************************************************************************
ok: [host1] =>
  msg: |-
    ansible_host: 1.1.1.1
    postgresql_harole: master
    postgresql_master_ip: 1.1.1.1
    postgresql_cluster_port: 5432
    postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
ok: [host2] =>
  msg: |-
    ansible_host: 2.2.2.2
    postgresql_harole: slave
    postgresql_master_ip: 1.1.1.1
    postgresql_cluster_port: 5432
    postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
ok: [host4] =>
  msg: |-
    ansible_host: 4.4.4.4
    postgresql_harole: slave
    postgresql_master_ip: 1.1.1.1
    postgresql_cluster_port: 5432
    postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
ok: [host3] =>
  msg: |-
    ansible_host: 3.3.3.3
    postgresql_harole: slave
    postgresql_master_ip: 1.1.1.1
    postgresql_cluster_port: 5432
    postgresql_slave_ip: ['2.2.2.2', '3.3.3.3', '4.4.4.4']
PLAY RECAP ***********************************************************************************
host1: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4: ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The easiest solution is probably to use Ansible's group_by module, which allows you to create groups dynamically based on the values of host variables. For example, given this inventory:
[postgresql]
host1 ansible_host=1.1.1.1 postgresql_cluster_port=5432 postgresql_harole=master
host2 ansible_host=2.2.2.2 postgresql_cluster_port=5432 postgresql_harole=worker postgresql_master_ip=1.1.1.1
host3 ansible_host=3.3.3.3 postgresql_cluster_port=5432 postgresql_harole=worker postgresql_master_ip=1.1.1.1
host4 ansible_host=4.4.4.4 postgresql_cluster_port=5432 postgresql_harole=worker postgresql_master_ip=1.1.1.1
We can group hosts using the postgresql_harole variable like this:
- hosts: all
  gather_facts: false
  tasks:
    - name: create group of postgresql workers
      group_by:
        key: "pg_role_{{ postgresql_harole|default('none') }}"

- hosts: pg_role_worker
  gather_facts: false
  tasks:
    - debug:
        var: groups.pg_role_worker
      run_once: true

    - debug:
        msg: "Hello world"
Running the above playbook would generate output similar to:
PLAY [all] ***********************************************************************************
TASK [create group of postgresql workers] ****************************************************
changed: [host1]
changed: [host2]
changed: [host3]
changed: [host4]
PLAY [pg_role_worker] ************************************************************************
TASK [debug] *********************************************************************************
ok: [host2] => {
    "groups.pg_role_worker": [
        "host2",
        "host3",
        "host4"
    ]
}
TASK [debug] *********************************************************************************
ok: [host2] => {
    "msg": "Hello world"
}
ok: [host3] => {
    "msg": "Hello world"
}
ok: [host4] => {
    "msg": "Hello world"
}
PLAY RECAP ***********************************************************************************
host1 : ok=1 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host2 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host3 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
host4 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You can see that (a) there now exists a group pg_role_worker containing hosts host2, host3, and host4 and that (b) the final debug task runs only on those hosts.
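If, as in the question, you also need the list of worker IP addresses, a sketch (my addition) would combine the dynamic group with the extract filter used in the previous answer, e.g. as a task in the second play:
- set_fact:
    slave_ip_list: "{{ groups.pg_role_worker |
                       map('extract', hostvars, 'ansible_host') |
                       list }}"
  run_once: true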

Ansible can we use if condition at hosts

How to run the playbook on a specific set of hosts with a conditional variable
vars_file.yml
deployment: blue
hosts_file.yml
[east1]
127.0.0.1
127.0.0.2
[west2]
127.0.0.3
127.0.0.4
playbook.yml
---
- hosts: all
  vars_files:
    - 'vars_file.yml'
  tasks:
    - copy: src=config dest=/tmp/

- hosts: {{ east1[0] if deployment == "blue" else west2[0] }}
  vars_files:
    - 'vars_file.yml'
  tasks:
    - shell: "./startup_script restart"
Note: I can't pass variables through the command line and I can't move the task to a new playbook.
You can access variables defined on another host by targeting that host's key in the hostvars dictionary.
In order to do that, though, you need to register the variable on the host with set_fact; importing it via vars_files won't be enough.
Here is an example, given the inventory:
all:
  children:
    east1:
      hosts:
        east_node_1:
          ansible_host: node1
        east_node_2:
          ansible_host: node4
    west2:
      hosts:
        west_node_1:
          ansible_host: node2
        west_node_2:
          ansible_host: node3
And the playbook:
- hosts: localhost
  gather_facts: no
  vars_files:
    - vars_file.yml
  tasks:
    - set_fact:
        deployment: "{{ deployment }}"

- hosts: >-
    {{ groups['east1'][0]
       if hostvars['localhost'].deployment == 'blue'
       else groups['west2'][0]
    }}
  gather_facts: no
  tasks:
    - debug:
This would yield the recaps:
PLAY [localhost] *******************************************************************************************************************
TASK [set_fact] ********************************************************************************************************************
ok: [localhost]
PLAY [east_node_1] *****************************************************************************************************************
TASK [debug] ***********************************************************************************************************************
ok: [east_node_1] => {
    "msg": "Hello world!"
}
PLAY RECAP *************************************************************************************************************************
east_node_1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
when vars_file.yml contains
deployment: blue
And
PLAY [localhost] *******************************************************************************************************************
TASK [set_fact] ********************************************************************************************************************
ok: [localhost]
PLAY [west_node_1] *****************************************************************************************************************
TASK [debug] ***********************************************************************************************************************
ok: [west_node_1] => {
    "msg": "Hello world!"
}
PLAY RECAP *************************************************************************************************************************
west_node_1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
when vars_file.yml contains
deployment: red
Another equivalent construction, using patterns to target the group, would just see the host target changed to:
- hosts: >-
    {{ 'east1[0]'
       if hostvars['localhost'].deployment == 'blue'
       else 'west2[0]'
    }}

Ansible-playbook: list all servers as comma separated and exclude own server

I have three servers
[servers]
server1
server2
server3
I would like to create, for each server, a list of the other servers, excluding the server itself. For example:
for server1: it should be server2,server3;
for server2: it should be server1,server3;
for server3: it should be server1,server2.
I can create a list of all servers, but I don't know how to exclude the current server:
- hosts: servers
  vars:
    network_check_list: "{{ groups['servers'] | join(',') }}"
You can use the difference filter with a single element list containing the current targeted server as argument:
---
- hosts: servers
  gather_facts: false
  vars:
    network_check_list: "{{ groups['servers'] | difference([inventory_hostname]) | join(',') }}"
  tasks:
    - debug:
        var: network_check_list
Since the Jinja2 expression is evaluated on the spot, for each run on a particular server, you can keep this definition in your playbook vars and it will adapt to each host's context in the task. Here is the result (using your example inventory):
$ ansible-playbook -i inventory play.yml
PLAY [servers] ****************************************************************************************************************************************************************************************************
TASK [debug] ******************************************************************************************************************************************************************************************************
ok: [server1] => {
    "network_check_list": "server2,server3"
}
ok: [server2] => {
    "network_check_list": "server1,server3"
}
ok: [server3] => {
    "network_check_list": "server1,server2"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
server1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Ref: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#set-theory-filters

Issue adding duplicate name with different ansible_user to add_host dynamic inventory

Here is my playbook that builds a dynamic inventory using add_host:
---
- name: "Play 1"
  hosts: localhost
  gather_facts: no
  tasks:
    - name: "Search database"
      command: >
        mysql --user=root --password=p#ssword deployment
        --host=localhost -Ns -e "SELECT dest_ip,username FROM deploy_dets"
      register: command_result

    - name: Add hosts
      add_host:
        name: "{{ item.split('\t')[0] }}"
        ansible_user: "{{ item.split('\t')[1] }}"
        groups: dest_nodes
      with_items: "{{ command_result.stdout_lines }}"
- hosts: dest_nodes
  gather_facts: false
  tasks:
    - debug:
        msg: Run the shell script with the arguments `{{ ansible_user }}` here"
The output is good and as expected when the 'name:' attributes passed to add_host have different values, i.e. the IPs '10.9.0.100' and '10.8.2.144':
$ ansible-playbook duplicate_hosts.yml
PLAY [Play 1] ***********************************************************************************************************************************************
TASK [Search database] **************************************************************************************************************************************
changed: [localhost]
TASK [Add hosts] ********************************************************************************************************************************************
changed: [localhost] => (item=10.9.0.100 user1)
changed: [localhost] => (item=10.8.2.144 user2)
PLAY [dest_nodes] *******************************************************************************************************************************************
TASK [debug] ************************************************************************************************************************************************
ok: [10.9.0.100] => {
    "msg": "Run the shell script with the arguments `user1` here\""
}
ok: [10.8.2.144] => {
    "msg": "Run the shell script with the arguments `user2` here\""
}
PLAY RECAP **************************************************************************************************************************************************
10.8.2.144 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.9.0.100 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The problem is that when the 'name:' attribute for add_host gets a duplicate entry, say 10.8.2.144, the play ignores the first name/ansible_user pair and runs only once with the latest entry, even though each entry has a unique 'ansible_user' value.
$ ansible-playbook duplicate_hosts.yml
PLAY [Play 1] ***********************************************************************************************************************************************
TASK [Search database] **************************************************************************************************************************************
changed: [localhost]
TASK [Add hosts] ********************************************************************************************************************************************
changed: [localhost] => (item=10.8.2.144 user1)
changed: [localhost] => (item=10.8.2.144 user2)
PLAY [dest_nodes] *******************************************************************************************************************************************
TASK [debug] ************************************************************************************************************************************************
ok: [10.8.2.144] => {
    "msg": "Run the shell script with the arguments `user2` here\""
}
PLAY RECAP **************************************************************************************************************************************************
10.8.2.144 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
localhost : ok=2 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Interestingly, the Add hosts task shows two entries for name 10.8.2.144 with different ansible_user values, i.e. 'user1' and 'user2', but when the group is targeted, only the single, latest entry runs, as seen in the output above.
I'm on the latest version of ansible.
Can you please suggest a solution where I can run the play for every unique 'ansible_user' on the same host?
In summary: I wish to run multiple tasks on the same host, first with 'user1' and then with 'user2'.
You can add an alias as the inventory hostname. Here I have used the username as the hostname (alias).
Please try this; I have not tested it.
- name: Add hosts
  add_host:
    hostname: "{{ item.split('\t')[1] }}"
    ansible_host: "{{ item.split('\t')[0] }}"
    ansible_user: "{{ item.split('\t')[1] }}"
    groups: dest_nodes
  with_items: "{{ command_result.stdout_lines }}"
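A follow-up play (a sketch of my own, not part of the original answer) would then run once per user alias, while ansible_host still points at the shared IP:
- hosts: dest_nodes
  gather_facts: false
  tasks:
    - debug:
        msg: "Run the shell script as `{{ ansible_user }}` on `{{ ansible_host }}`"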
