Is it possible to customize an IP address with Ansible? For example, when the remote system IP is 127.34.34.21, the DNS IP is 127.34.34.1.
The DNS IP is always the system IP with the last number replaced by 1. Is it possible that Ansible can do this dynamically?
Look up the system IP > customize it > set the new IP as the DNS IP.
Q: "Is it possible to customize an IP address with Ansible?"
A: Yes, it is. There are many options.
Use the split method and the join filter. For example:
- set_fact:
    dns: "{{ ip.split('.')[:3]|join('.') ~ '.1' }}"
  vars:
    ip: 127.34.34.21

- debug:
    var: dns
gives
dns: 127.34.34.1
It's possible to use the splitext filter instead of split/join. For example, the task below gives the same result:
- set_fact:
    dns: "{{ (ip|splitext)[0] ~ '.1' }}"
The next option is the ipaddr filter (it requires the Python netaddr library). For example, the task below gives the same result:
- set_fact:
    dns: "{{ (ip ~ '/24')|ipaddr('first_usable') }}"
Q: "Is it possible that Ansible can do this dynamical?"
A: Yes. It is. You'll have to decide which IP address to use. For example, use ansible_default_ipv4.address
- debug:
    var: ansible_default_ipv4.address

- set_fact:
    dns: "{{ (ip ~ '/24')|ipaddr('first_usable') }}"
  vars:
    ip: "{{ ansible_default_ipv4.address }}"

- debug:
    var: dns
gives (abridged)
ansible_default_ipv4.address: 10.1.0.27
dns: 10.1.0.1
See the setup module, the other attributes of the variable ansible_default_ipv4, and the variable ansible_all_ipv4_addresses.
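If the goal is to then apply the computed address as the DNS server on the target, a minimal sketch could push it into /etc/resolv.conf with lineinfile (the file path and the regexp are assumptions; adjust them to however DNS is managed on your systems):

- name: Set the computed DNS IP on the target (sketch)
  lineinfile:
    # assumes DNS is configured directly in resolv.conf
    path: /etc/resolv.conf
    regexp: '^nameserver '
    line: "nameserver {{ dns }}"
  become: true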
Related
I'm trying to assign hostnames to a few hosts, gathering them from the inventory.
My inventory is:
[masters]
master.domain.tld
[workers]
worker1.domain.tld
worker2.domain.tld
[all:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/id_rsa
The code I'm using is:
- name: Set hostname
  shell: hostnamectl set-hostname {{ item }}
  with_items: "{{ groups['all'] }}"
Unfortunately, the code iterates over all items (both IPs and hostnames) and ends up assigning the last item to all 3 hosts...
Any help would be much appreciated.
Don't loop: that loop runs through all your hosts on each target. Use the natural inventory loop and the inventory_hostname special variable instead.
Moreover, don't use shell when there is a dedicated module:
- name: Assign hostname
  hosts: all
  gather_facts: false
  tasks:
    - name: Set hostname for target
      hostname:
        name: "{{ inventory_hostname }}"
I am trying to build a config file that contains the list of my inventory host servers and their fields, e.g. IP, FQDN, etc.
Here is the relevant part of my inventory file:
ocp_cluster:
  hosts:
    anbootstrap.ocp.hattusas.tst:
      fqdn: anbootstrap.ocp.hattusas.tst
      ip: 10.72.217.92
    anmaster1.ocp.hattusas.tst:
      fqdn: anmaster1.ocp.hattusas.tst
      ip: 10.72.217.93
    anmaster2.ocp.hattusas.tst:
      fqdn: anmaster2.ocp.hattusas.tst
      ip: 10.72.217.94
    anmaster3.ocp.hattusas.tst:
And here is my playbook:
- name: Adding OCP Clusters to DHCP configuration
  debug:
    "{{ hostvars[item][fqdn] }}"
  loop: "{{ groups['ocp_cluster'] }}"
(I will use blockinfile soon)
When I run my playbook, I get an 'fqdn is undefined' error. I tried using a for loop and it didn't help. Any suggestions?
Thanks a lot.
The fixed task is below:
- debug:
    msg: "{{ hostvars[item]['fqdn'] }}"
  loop: "{{ groups['ocp_cluster'] }}"
The debug parameter msg was missing.
fqdn is a key of a dictionary, so it must be quoted inside the brackets, just like ocp_cluster.
It's possible to use dot notation and simplify the references to the dictionary attributes:
- debug:
    msg: "{{ hostvars[item].fqdn }}"
  loop: "{{ groups.ocp_cluster }}"
I tried the following in my inventory:
store_controller:
  hosts:
    SERVER:
      ansible_host: "{{ STORE_CTL }}"
      mgmt_ip: "{{ ansible_host }}"

global_mgmt:
  hosts:
    SERVER:
      ansible_host: "{{ NOMAD_SERVER }}"
      mgmt_ip: "{{ ansible_host }}"

node_exporter:  ## if I comment out the part from here to the end, it's OK ##
  hosts:
    "{{ item }}":
      ansible_host: "{{ item }}"
      mgmt_ip: "{{ ansible_host }}"
  with_items:
    - "172.7.7.1"
    - "172.7.7.9"
    - "172.7.7.12"
But Ansible doesn't let me use 'with_items' here. It seems that Ansible does not support an iterator over hosts.
How can I define the hosts in the group node_exporter for my three IPs?
All you need to define is this:
node_exporter:
  hosts:
    172.7.7.1:
    172.7.7.9:
    172.7.7.12:
  vars:
    mgmt_ip: "{{ inventory_hostname }}"
Explanation:
mgmt_ip defines only a template (a string value) that will be resolved at the time it is used.
For each target machine, inventory_hostname (and thus mgmt_ip) will resolve to the IP address of the currently executing host.
Using ansible_host to assign the same value as the inventory host name is an empty action, so you don't need it at all.
I don't think it adds any value or clarity to the code, but since you commented that it was working for you, this is the way to achieve it with multiple hosts. All you achieve is creating an alias mgmt_ip for inventory_hostname.
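To see the alias resolve per host, here is a minimal sketch (assuming the node_exporter group defined above):

- hosts: node_exporter
  gather_facts: false
  tasks:
    - debug:
        var: mgmt_ip

Each of the three hosts prints its own IP address as the value of mgmt_ip.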
Regarding the premise of the question:
For YAML, with_items: is just a dictionary key (a string value). It is Ansible that may or may not make use of it.
It makes use of it when it is specified in a task, where it has a semantic meaning (see the example below).
Otherwise it either ignores the key or reports an error.
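For example, in a task like the one below with_items does have meaning (purely illustrative, not taken from the question):

- name: Print each address
  debug:
    msg: "{{ item }}"
  with_items:
    - 172.7.7.1
    - 172.7.7.9
    - 172.7.7.12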
Looking quickly at your code, you have an indentation problem.
Try to add 2 spaces before "{{item}}:"
Here is a simple cluster inventory:
[cluster]
host1
host2
host3
Each host has an interface configured with an IPv4 address. This information can be gathered with the setup module and will be in ansible_facts.ansible_{{ interface }}.ipv4.address.
How do I get the IPv4 address of that interface from each individual host and make them all available to every host in the cluster, so that each host knows all cluster IPs?
How could this be implemented in a role?
As @larsks already commented, the information is available in hostvars. If, for example, you want to print all cluster IPs, you'd iterate over groups['cluster']:
- name: Print IP addresses
  debug:
    var: hostvars[item]['ansible_eth0']['ipv4']['address']
  with_items: "{{ groups['cluster'] }}"
Note that you'll need to gather the facts first to populate hostvars for the cluster hosts. Make sure you have the following in the play where you call the task (or in the role where you have the task):
- name: A play
  hosts: cluster
  gather_facts: yes
After searching for a while on how to build list variables with Jinja2, here is a complete solution. It requires the cluster_interface variable and pings all nodes in the cluster to check connectivity:
---
- name: gather facts about {{ cluster_interface }}
  setup:
    filter: "ansible_{{ cluster_interface }}"
  delegate_to: "{{ item }}"
  delegate_facts: True
  with_items: "{{ ansible_play_hosts }}"
  when: hostvars[item]['ansible_' + cluster_interface] is not defined

- name: cluster nodes IP addresses
  set_fact:
    cluster_nodes_ips: "{{ cluster_nodes_ips|default([]) + [hostvars[item]['ansible_' + cluster_interface]['ipv4']['address']] }}"
  with_items: "{{ ansible_play_hosts }}"

- name: current cluster node IP address
  set_fact:
    cluster_current_node_ip: "{{ hostvars[inventory_hostname]['ansible_' + cluster_interface]['ipv4']['address'] }}"

- name: ping all cluster nodes
  command: ping -c 1 {{ item }}
  with_items: "{{ cluster_nodes_ips }}"
  changed_when: false
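To answer the role part of the question, these tasks can go into a role's tasks/main.yml and be called from a play like the one below (the role name cluster_ips and the interface eth0 are assumptions):

- hosts: cluster
  gather_facts: false  # the role gathers the filtered facts it needs itself
  roles:
    - role: cluster_ips
      cluster_interface: eth0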
I am trying to get a list of IP addresses instead of host names from an inventory group. These IP addresses are already in my hostvars, but I can't turn them into a list based on their group name.
Here is my original playbook:
- name: Install Kafka
  hosts: kafka
  roles:
    - role: kafka
      kafka_hosts: "{{ groups['kafka'] | list }}"
      kafka_zookeeper_hosts: "{{ groups['zookeeper'] | list }}"
As you can see, kafka_hosts and kafka_zookeeper_hosts are lists.
The result of this playbook on my servers' configuration is correct:
/etc/kafka/consumer.properties:zookeeper.connect=opo-4:2181,opo-5:2181,opo-6:2181
/etc/kafka/producer.properties:metadata.broker.list=opo-1:9092,opo-2:9092,opo-3:9092
Unfortunately, this configuration does not work for all my servers because of DNS issues. I need a list of IP addresses instead of hostnames.
If I set the IP addresses manually in my playbook, my configuration works, though unfortunately not dynamically:
kafka_hosts:
  - 192.168.48.1
  - 192.168.48.2
  - 192.168.48.3
kafka_zookeeper_hosts:
  - 192.168.48.4
  - 192.168.48.5
  - 192.168.48.6
And the resulting configuration on my servers works:
/etc/kafka/consumer.properties:zookeeper.connect=192.168.48.4:2181,192.168.48.5:2181,192.168.48.6:2181
/etc/kafka/producer.properties:metadata.broker.list=192.168.48.1:9092,192.168.48.2:9092,192.168.48.3:9092
In this configuration I have to maintain the same information manually in both my groups and my playbook. So I would like to get the hostvars from the groups kafka and zookeeper.
Here are the hostvars and groups, defined in the main inventory file:
[opo]
opo-1 ansible_host=192.168.48.1
opo-2 ansible_host=192.168.48.2
...
opo-6 ansible_host=192.168.48.6
[zookeeper]
opo-1
opo-2
opo-3
[kafka]
opo-4
opo-5
opo-6
What I have tried, based on answers found on Stack Overflow for similar issues:
kafka_hosts: "{{ groups[['kafka']['ansible_host']] | list }}"
kafka_hosts: "{{ [hostvars[groups['kafka'][0]]['ansible_host']] | list }}" # the list contains only the first ip address
kafka_hosts: "{{ [hostvars[groups['kafka']]['ansible_host']] | list }}"
kafka_hosts: "{{ hostvars[groups['kafka']].ansible_host | list }}"
kafka_hosts: "{{ [hostvars.[groups['kafka']].ansible_host] | list }}"
kafka_hosts: "{{ groups['kafka'].ansible_host | list }}"
None of them gives the expected list of IP addresses.
You need to use a combination of the map and extract filters:
kafka_hosts: "{{ groups['kafka'] | map('extract',hostvars,'ansible_host') | list }}"