Ansible Setup Module to search and find an IP address

My hosts have 3 network IP addresses and one of them is needed later in my playbook.
In my playbook I have run the following setup tasks:
- name: Gather Networks Facts into Variable
  setup:
  register: setup

- name: Debug Set Facts
  debug:
    var: setup.ansible_facts.ansible_ip_addresses
This provides the following output:
{
    "setup.ansible_facts.ansible_ip_addresses": [
        "10.0.2.15",
        "fe80::85ae:2178:df12:8da0",
        "192.168.99.63",
        "fe80::3871:2201:c0ab:6e39",
        "192.168.0.63",
        "fe80::79c5:aa03:47ff:bf65",
        "fd89:8d5f:2227:0:79c5:aa03:47ff:bf65",
        "2a02:c7f:9420:7100:79c5:aa03:47ff:bf65"
    ]
}
I am trying to find 192.168.0.63 by matching on the first three octets, i.e. 192.168.0. I also want to get that value into a fact so I can use it later in my playbook.
What would be the best way to search and find that value with Ansible or Jinja2?

Will this do?
- set_fact:
    my_fact: "{{ (setup.ansible_facts.ansible_ip_addresses | select('match', '192.168.0.') | list)[0] }}"
If there are multiple values matching the pattern, it will get the first one in order.
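If you would rather match by subnet than by string prefix, a variant using the ipaddr filter could look like the sketch below (this assumes the ansible.utils collection, which provides the filter, is installed):

- set_fact:
    # Keep only addresses inside 192.168.0.0/24 and take the first one
    my_fact: "{{ setup.ansible_facts.ansible_ip_addresses | ansible.utils.ipaddr('192.168.0.0/24') | first }}"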

Related

Ansible: How to find aggregated file size across inventory hosts?

I'm able to find the total size of all three files in the variable totalsize on a single host, as shown below.
cat all.hosts
[destnode]
myhost1
myhost2
myhost3
cat myplay.yml

- name: "Play 1"
  hosts: "destnode"
  gather_facts: false
  tasks:
    - name: Fail if file size is greater than 2GB
      include_tasks: "{{ playbook_dir }}/checkfilesize.yml"
      with_items:
        - "{{ source_file_new.splitlines() }}"
cat checkfilesize.yml

- name: Check file size
  stat:
    path: "{{ item }}"
  register: file_size

- set_fact:
    totalsize: "{{ totalsize | default(0) | int + (file_size.stat.size / 1024 / 1024) | int }}"

- debug:
    msg: "TOTALSIZE: {{ totalsize }}"
To run:
ansible-playbook -i all.hosts myplay.yml -e source_file_new="/tmp/file1.log\n/tmp/file1.log\n/tmp/file1.log"
The above play works fine and gets me the total sum of the sizes of all the files listed in the variable source_file_new on each individual host.
My requirement is to get the total size of all the files from all three (or more) hosts in the destnode group.
So, if each file is 10 MB on each host, the current playbook prints 10+10+10=30MB on host1, and likewise on host2 and host3.
Instead, I wish to get the sum of all the sizes from all the hosts, like below:
host1 (10+10+10) + host2 (10+10+10) + host3 (10+10+10) = 90MB
Extract the totalsize facts for each node in destnode from hostvars and sum them up.
In a nutshell, at the end of your current checkfilesize.yml task file, replace the debug task:
- name: Show total size for all nodes
  vars:
    overall_size: "{{ groups['destnode'] | map('extract', hostvars, 'totalsize') | map('int') | sum }}"
  debug:
    msg: "Total size for all nodes: {{ overall_size }}"
  run_once: true
If you need to reuse that value later, you can store it at once in a fact that will be set with the same value for all hosts:
- name: Set overall size as fact for all hosts
  set_fact:
    overall_size: "{{ groups['destnode'] | map('extract', hostvars, 'totalsize') | map('int') | sum }}"
  run_once: true

- name: Show the overall size (one result with the same value for each host)
  debug:
    msg: "Total size for all nodes: {{ overall_size }} - (from {{ inventory_hostname }})"
As an alternative, you can replace set_fact with a variable declaration at play level.
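For example, a minimal sketch of the play-level variant, reusing the destnode group and the totalsize fact from above:

- name: Report aggregate size at play level
  hosts: destnode
  gather_facts: false
  vars:
    overall_size: "{{ groups['destnode'] | map('extract', hostvars, 'totalsize') | map('int') | sum }}"
  tasks:
    - name: Show the overall size
      debug:
        msg: "Total size for all nodes: {{ overall_size }}"
      run_once: true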
It seems you are trying to implement (distributed) programming paradigms that are not really possible this way, since Ansible is not a programming language or a tool for distributed computing but a Configuration Management Tool in which you declare a state. Therefore this approach is not recommended and should probably be avoided.
From your description, your use case looks to me like you want to implement a kind of Reducer from a MapReduce environment in Ansible.
You have already observed that the facts are distributed over the hosts in your environment. To sum them up, they need to be aggregated on one of the hosts, probably the Control Node.
To do so:
It might be possible to use Delegating facts with your set_fact task to get all the information to sum up onto one host
Another approach could be to let your task create and add custom facts about the summed-up file size during the run; those Custom Facts could then be gathered and cached on the Control Node during the next run (see the sketch after this list)
A third option, since Custom Facts can be simple files, is to create a simple cron job which writes the necessary .fact file with the requested information (file size, etc.) on a schedule
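A rough sketch of the custom-facts idea (the second option above), assuming a hypothetical fact file named filesizes.fact and that totalsize has already been computed as in the question:

- name: Ensure the facts.d directory exists
  file:
    path: /etc/ansible/facts.d
    state: directory
    mode: "0755"
  become: true

- name: Persist the computed total as a custom local fact
  copy:
    dest: /etc/ansible/facts.d/filesizes.fact
    content: "{{ {'totalsize_mb': totalsize | int} | to_json }}"
    mode: "0644"
  become: true

- name: Re-read local facts so ansible_local is up to date in this run
  setup:
    filter: ansible_local

- name: Show the value cached on this host
  debug:
    msg: "Cached total for {{ inventory_hostname }}: {{ ansible_local.filesizes.totalsize_mb }}"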
Further Documentation
facts.d or local facts
Introduction to Ansible facts
Similar Q&A
Ansible: How to define ... a global ... variable?
Summary
My requirement is to get the total size of all the files from all the three (or more) hosts ...
Instead of creating a playbook which generates and calculates values (facts) at execution time, it is recommended to define something on the Target Nodes and create a playbook which just collects the facts in question.
For example:
... add dynamic facts by adding executable scripts to facts.d. For example, you can add a list of all users on a host to your facts by creating and running a script in facts.d.
which could just as well be a list of files and their sizes.

How to write loop in ansible with extra variable

I have got list of the mac address in the text file mac.txt such as:
[
"08f1.ea6d.033c",
"08f1.ea6d.033d",
"08f1.ea6d.033e",
"08f1.ea6d.033f",
"b883.0381.4b.20",
"b883.0381.4b21",
"b883.0384.d51c",
"b883.0384.d51d"
]
Now I want to check the above MAC addresses one by one on the switch, to verify the connectivity between the server and the switch. Not all of these MAC addresses may exist on the switch; let's say only two exist, and those MACs need to be stored in a variable.
Note: these mac addresses may vary.
And here is the switch playbook which I was writing:
- name: Run the show lldp neighbors command & find out the switch port
  ios_command:
    commands: show mac address-table | in {{ macdress }}
  with_item:
    - macaddress
What is the correct playbook to achieve the requirement?
Since your text file is a valid json list, you simply have to read its content using the file lookup and load it inside a variable using the from_json filter. You can then use the variable (in a loop or anything else).
---
- name: Load values from json file demo
  hosts: localhost
  gather_facts: false
  vars:
    macaddresses: "{{ lookup('file', 'mac.txt') | from_json }}"
  tasks:
    - name: Show imported macs list
      debug:
        var: macaddresses

    - name: Loop over imported macs
      debug:
        msg: "I'm looping over mac {{ item }}"
      loop: "{{ macaddresses }}"
Note: loop is the more recent syntax for looping and is the equivalent of with_list. In this situation you can just as well replace it with with_list or with_items, which are not deprecated and will do the same job. See the Ansible loop documentation for more info.
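To then check which of those MACs actually exist on the switch and keep only the ones that are found, a rough sketch (untested, assuming the play runs against a Cisco IOS device over network_cli and that the command output is empty when a MAC is absent; mac_lookups and found_macs are just illustrative names):

- name: Look up each MAC in the switch MAC address table
  ios_command:
    commands: "show mac address-table | include {{ item }}"
  loop: "{{ macaddresses }}"
  register: mac_lookups

- name: Keep only the MACs that returned a matching line
  set_fact:
    found_macs: "{{ mac_lookups.results
                    | selectattr('stdout.0', 'defined')
                    | selectattr('stdout.0', 'ne', '')
                    | map(attribute='item')
                    | list }}"

- name: Show the MACs that were found on the switch
  debug:
    var: found_macs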

How to use variables between different roles in ansible

my playbook structure looks like:
- hosts: all
  name: all
  roles:
    - roles1
    - roles2
In the tasks of roles1, I define a variable like this:
---
# tasks for roles1
- name: Get the zookeeper image tag # rel3.0
  run_once: true
  shell: echo '{{item.split(":")[-1]}}' # Here can get the string rel3.0 normally
  with_items: "{{ret.stdout.split('\n')}}"
  when: "'zookeeper' in item"
  register: zk_tag
ret.stdout:
Loaded image: test/old/kafka:latest
Loaded image: test/new/mysql:v5.7
Loaded image: test/old/zookeeper:rel3.0
In tasks of roles2, I want to use the zk_tag variable
- name: Test if the variable zk_tag can be used in roles2
  debug: var={{ zk_tag.stdout }}
Error :
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'stdout'
I think I encountered the following 2 problems:
When a variable is registered with register and a when condition is added, the variable cannot be used across all groups. How do I solve this and make the variable available to all groups?
And, as my title says: how do I use variables between different roles in Ansible?
You're most likely starting a new playbook for a new host, meaning all previously collected vars are lost.
What you can do is pass a var to another host with the add_host module.
- name: Pass variable from this play to the other host in the same play
  add_host:
    name: hostname2
    var_in_play_2: "{{ var_in_play_1 }}"
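A later play targeting hostname2 can then read that variable directly; a minimal sketch, reusing the hostname2 and var_in_play_2 names from above:

- hosts: hostname2
  gather_facts: false
  tasks:
    - name: Use the variable passed in via add_host
      debug:
        msg: "Received: {{ var_in_play_2 }}"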
--- EDIT ---
It's a bit unclear. Why do you use the when statement in the first place if you want the variable to be available to every host in the play?
You might want to use the group_vars/all.yml file to place vars in.
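For example, a group_vars/all.yml entry could be as simple as the following (the variable name here is only an illustration):

# group_vars/all.yml
zookeeper_image_tag: rel3.0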
Also, as I read it, using add_host should be the way to go. Can you post your playbook, and the outcome of your playbook, on a site such as pastebin?
If there is any chance the var is not defined because of a when condition, you should use a default value to force the var to be defined when using it. While you are at it, use the debug module for your tests rather than echoing something in a shell
- name: Debug my var
  debug:
    msg: "{{ docker_exists | default(false) }}"

How can I store custom facts about remote hosts in aggregate before proceeding with rest of the runbook?

I'm writing a cluster provisioning playbook in Ansible that requires each node to be configured with the public certificates of the other nodes at installation time. I can't think of an easy way to tell ansible to:
Go fetch the remote certs
Shove them into a list
Make those cert summaries available to each remote node for generating the authorized list of nodes
At the moment, given the small number of cluster nodes, I'm going to do this by hand (copy the output of the first playbook into the variables of the second), but it would be most helpful if there was a way to do this in a single playbook.
My answer will be as general as possible: store a fact on a particular machine in a group, and read that fact for all machines in the group from another machine.
I take for granted that your playbook is actually targeting a group my_node_group containing all your cluster nodes.
Store a piece of information from a remote node in its own facts (or get this directly from your inventory...).
# This one should be replaced with getting certs in your context
# with whatever solution is best suited for you.
- name: Get an info from current machine
  shell: echo "I'm a dummy task running on {{ inventory_hostname }}"
  register: my_info_cmd

- name: Push info in a fact for current node
  set_fact:
    my_info: "{{ my_info_cmd.stdout }}"
This is the really useful part: use the stored info elsewhere.
- name: example loop to access 'my_info' on each machines of group 'my_node_group'
  debug:
    var: item
  loop: >-
    {{
      groups['my_node_group']
      | map('extract', hostvars, 'my_info')
      | list
    }}
Explanation of last step
Get machines in the group my_node_group
Use those names to map the extract filter on hostvars and get a list of the corresponding facts hashes, retaining only the my_info attribute
Transform the returned map object into a list and loop over it.

Can I update the hosts inventory and use new hosts in same playbook?

I'm adding a few hosts to the hosts inventory file through a playbook. Now I'm using those newly added hosts in the same playbook. But those newly added hosts do not seem to be readable by the same playbook in the same run, because I get:
skipping: no hosts matched
When I run it separately, i.e. I update the hosts file through one playbook and use the updated hosts through another playbook, it works fine.
I wanted to do something like this recently, using ansible 1.8.4. I found that add_host needs to use a group name, or the play will be skipped with "no hosts matched". At the same time I wanted play #2 to use facts discovered in play #1. Variables and facts normally remain scoped to each host, so this requires using the magic variables hostvars and groups.
Here's what I came up with. It works, but it's a bit ugly. I'd love to see a cleaner alternative.
# test.yml
#
# The name of the active CFN stack is provided on the command line,
# or is set in the environment variable AWS_STACK_NAME.
# Any host in the active CFN stack can tell us what we need to know.
# In real life the selection is random.
# For a simpler demo, just use the first one.
- hosts: >-
    tag_aws_cloudformation_stack-name_{{ stack
    |default(lookup('env','AWS_STACK_NAME')) }}[0]
  gather_facts: no
  tasks:
    # Get some facts about the instance.
    - action: ec2_facts
    # In real life we might have more facts from various sources.
    - set_fact: fubar='baz'
    # This could be any hostname.
    - set_fact: hostname_next='localhost'
    # It's too late for variables set in this play to affect host matching
    # in the next play, but we can add a new host to temporary inventory.
    # Use a well-known group name, so we can set the hosts for the next play.
    # It shouldn't matter if another playbook uses the same name,
    # because this entry is exclusive to the running playbook.
    - name: add new hostname to temporary inventory
      connection: local
      add_host: group=temp_inventory name='{{ hostname_next }}'

# Now proceed with the real work on the designated host.
- hosts: temp_inventory
  gather_facts: no
  tasks:
    # The host has changed, so the facts from play #1 are out of scope.
    # We can still get to them through hostvars, but it isn't easy.
    # In real life we don't know which host ran play #1,
    # so we have to check all of them.
    - set_fact:
        stack='{{ stack|default(lookup("env","AWS_STACK_NAME")) }}'
    - set_fact:
        group_name='{{ "tag_aws_cloudformation_stack-name_" + stack }}'
    - set_fact:
        fubar='{% for h in groups[group_name] %} {{
          hostvars[h]["fubar"]|default("") }} {% endfor %}'
    - set_fact:
        instance_id='{% for h in groups[group_name] %} {{
          hostvars[h]["ansible_ec2_instance_id"]|default("") }} {% endfor %}'
    # Trim extra leading and trailing whitespace.
    - set_fact: fubar='{{ fubar|replace(" ", "") }}'
    - set_fact: instance_id='{{ instance_id|replace(" ", "") }}'
    # Now we can use the variables instance_id and fubar.
    - debug: var='{{ fubar }}'
    - debug: var='{{ instance_id }}'
# end
It's not entirely clear what you're doing - but from what I gather, you're using the add_host module in a play.
It seems logical that you cannot limit that same play to those hosts, because they don't exist yet... so this can never work:
- name: Play - add a host
  hosts: new_host
  tasks:
    - name: add new host
      add_host: name=new_host
But you're free to add multiple plays to a single playbook file (which you also seem to have figured out):
- name: Play 1 - add a host
  hosts: a_single_host
  tasks:
    - name: add new host
      add_host: name=new_host

- name: Play 2 - do stuff
  hosts: new_host
  tasks:
    - name: do stuff
It sounds like you are modifying the Ansible inventory file with your playbook, and then wanting to use the new contents of the file. Just modifying the contents of the file on disk, however, won't cause the inventory that Ansible is working with to be updated. The way Ansible works is that it reads that file (and any other inventory source you have) when it first begins and puts the host names it finds into memory. From then on it works only with the inventory that it has stored in memory, the stuff that existed when it first started running. It has no knowledge of any subsequent changes to the file.
But there are ways to do what you want! One option you could use is to add the new host into the inventory file, and also load it into memory using the add_host module. That's two separate steps: 1) add the new host to the file's inventory, and then 2) add the same new host to in-memory inventory using the add_host module:
- name: add a host to in-memory inventory
  add_host:
    name: "{{ new_host_name }}"
    groups: "{{ group_name }}"
A second option is to tell Ansible to refresh the in-memory inventory from the file. But you have to explicitly tell it to do that. Using this option, you have two related steps: 1) add the new host to the file's inventory, like you already did, and then 2) use the meta module:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory
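Putting the second option together, a minimal sketch (the inventory path, group name, and new_host_name variable are hypothetical; adjust them to your setup):

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Add the new host to the inventory file on disk
      lineinfile:
        path: ./hosts
        insertafter: '^\[new_group\]'
        line: "{{ new_host_name }}"

    - name: Refresh the in-memory inventory from the modified file
      meta: refresh_inventory

- hosts: new_group
  gather_facts: false
  tasks:
    - name: Work with the newly added host
      debug:
        msg: "Now running against {{ inventory_hostname }}"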
