Ansible lineinfile Results in Wrong Content in Hosts File - ansible

I'm using the following task to update the /etc/hosts file on my inventory hosts, but strangely I see a different alias being entered.
- debug:
    msg: "hostname = {{ inventory_hostname }} has ansible_fqdn = {{ ansible_fqdn }} and ansible_ipv4 = {{ ansible_default_ipv4.address }} and ansible_hostname = {{ ansible_hostname }}"

- name: Fix /etc/hosts removing the old hostname
  become: true
  throttle: 1
  lineinfile: >
    state=present
    dest=/etc/hosts
    line="{{ ansible_default_ipv4.address }} {{ inventory_hostname }} {{ ansible_hostname }}"
    regexp="^{{ ansible_default_ipv4.address }}"
The output of the debug statement is:
TASK [common : debug] ***********************************************************************************************************************************************************************************************
ok: [n2.my-host.com] => {
    "msg": "hostname = n2.my-host.com has ansible_fqdn = n2.my-host.com and ansible_ipv4 = 192.168.0.203 and ansible_hostname = n2"
}
But when I SSH into the remote machine, I get to see the following in my /etc/hosts:
127.0.1.1 n2.my-host.com n1
127.0.0.1 localhost
Any ideas as to what might have gone wrong? I have another machine in the inventory which is configured as n1.my-host.com, but how come the alias is getting messed up?
My inventory file:
[master]
m1.my-host.com ansible_host=192.168.0.100
[node]
n1.my-host.com ansible_host=192.168.0.102
n2.my-host.com ansible_host=192.168.0.103
[k3s_cluster:children]
master
node
[all:vars]
ansible_python_interpreter=/usr/bin/python3

Any ideas as to what might have gone wrong? ... how come the alias is getting messed up?
Because of the hint within your example code
- name: Fix /etc/hosts removing the old hostname
it is assumed that you are going to change the hostnames of the Remote Nodes.
Therefore the issue might be caused by the fact that ansible_hostname is a fact gathered dynamically on the Remote Node at runtime by the setup module, whereas inventory_hostname is a Special variable that comes from the inventory file on the Control Node.
inventory_hostname The inventory name for the ‘current’ host being iterated over in the play
inventory_hostname_short The short version of inventory_hostname
So, to prevent a mismatch of information from different sources, you could take a single-source-of-truth approach.
- name: Fix /etc/hosts removing the old hostname
  become: true
  lineinfile:
    path: /etc/hosts
    regexp: "^{{ ansible_default_ipv4.address }}"
    line: "{{ ansible_default_ipv4.address }} {{ inventory_hostname }} {{ inventory_hostname_short }}"
    state: present
Please note that on the Remote Node itself there can also be different sources for the hostname. Therefore you might need to change it not only in /etc/hosts but also in several other places
- name: Set hostname to inventory_hostname
  shell:
    cmd: "nmcli general hostname {{ inventory_hostname }} && nmcli general hostname"
or, depending on your target system, even via hostnamectl.
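On systemd-based targets the same idea could use hostnamectl directly; a minimal sketch (an assumption, not part of the original answer):

- name: Set hostname via hostnamectl
  become: true
  command: "hostnamectl set-hostname {{ inventory_hostname }}"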

Related

Trying to get Hostname from a group

I am trying to get the inventory_hostname from a hosts group I created.
What I am really trying to achieve is copying a file to a remote server under a new name, where the new name is another remote server's hostname.
{{ inventory_hostname }} gives the IP address and {{ ansible_hostname }} returns the hostname as default variables, but while doing this I am stuck:
- name: Copy File
  hosts: nagios
  become: yes
  tasks:
    - shell: cp /home/xxx/test.txt /home/xxx/{{ item }}.txt
      with_items: "{{ groups['dbs'] }}"
{{ item }} returns the IP address but I want the hostname, not the IP for this one. Any help would be appreciated.
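A minimal sketch of one way to do this (an assumption, not a verified answer from this thread): look up each host's ansible_hostname in hostvars instead of using the raw item, provided facts have been gathered for the dbs hosts:

- name: Copy file named after each dbs host's hostname
  copy:
    src: /home/xxx/test.txt
    dest: "/home/xxx/{{ hostvars[item]['ansible_hostname'] }}.txt"
    remote_src: yes   # copy on the target machine, like the original cp
  with_items: "{{ groups['dbs'] }}"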

Ansible assign hostname using inventory items

I'm trying to assign hostnames to a few hosts, gathering them from the inventory.
My inventory is:
[masters]
master.domain.tld
[workers]
worker1.domain.tld
worker2.domain.tld
[all:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/id_rsa
The code I'm using is:
- name: Set hostname
  shell: hostnamectl set-hostname {{ item }}
  with_items: "{{ groups['all'] }}"
Unfortunately, the code iterates over all items (both IPs and hostnames) and assigns the last item to all 3 hosts...
Any help would be much appreciated.
Don't loop: this is going through all your hosts on each target. Use only the natural inventory loop and the inventory_hostname special var.
Moreover, don't use shell when there is a dedicated module:
- name: Assign hostname
  hosts: all
  gather_facts: false
  tasks:
    - name: Set hostname for target
      hostname:
        name: "{{ inventory_hostname }}"

Ansible: How to Define Variables in Playbook Which Do Not Change per Host

How to set variables in an Ansible playbook which do not change per host?
Per S.O.P. before posting, I read the Ansible docs on Using Variables, etc., and of course searched Stack Overflow, and the Internet for possible answers. What I've seen discussed was where to define variables, but not how to set variables in a playbook which do not change with each host in the inventory.
I have an Ansible playbook where variables are set from Ansible facts.
The variables are used to create a string with the current date and time, which is used as the filename for a log.
e.g. HealthCheckReport-YYYY-MM-DD_HHMM.txt
A time stamped file is created, then the results from the command run for each server are written to this file.
If the time (minutes) changes while the play is still iterating through the inventory, the variable changes, throwing a "path does not exist" error for each of the remaining hosts.
The example below is an Ansible playbook which runs the nslookup command for the hosts listed in the default inventory file.
Set and concatenate variables
Create a file with a time stamped filename (The OS is SuSe Linux)
Run the nslookup command on hosts in the inventory file
Write the command results to the time stamped file
---
- name: Output to Filename with Timestamp
  hosts: healthchecks
  connection: local
  gather_facts: yes
  strategy: linear
  order: inventory
  vars:
    report_filename_prefix: "HealthCheckResults-"
    report_date_time: "{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}"
    report_filename_date: "{{ report_filename_prefix }}{{ report_date_time }}.txt"
    report_path: "/reports/healthchecks/{{ report_filename_date }}"
  tasks:
    - name: Create file with timestamped filename
      delegate_to: localhost
      lineinfile:
        path: "{{ report_path }}"
        create: yes
        line: "Start: Health Check Report\n{{ report_path }}"
      run_once: true
    - name: Run nslookup command
      delegate_to: localhost
      throttle: 1
      command: nslookup {{ inventory_hostname }}
      register: nslookup_result
    - name: Append nslookup results to a file
      delegate_to: localhost
      throttle: 1
      blockinfile:
        state: present
        insertafter: EOF
        dest: "{{ report_path }}"
        marker: "- - - - - - - - - - - - - - - - - - - - -"
        block: |
          Server: {{ inventory_hostname }}
          Environment: {{ environmentz }}
          {{ nslookup_result.stdout_lines.3 }}
          {{ nslookup_result.stdout_lines.4 }}
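A common fix (a sketch, not part of the original post) is to evaluate the timestamp once and pin it with set_fact as the first task: entries under vars: are re-templated on every reference, while a fact set once stays constant, and run_once broadcasts it to every host in the play:

    - name: Pin the report path once for the whole run
      # must be the first task, before anything that uses report_path;
      # set_fact takes precedence over play vars, so this freezes the value
      set_fact:
        report_path: "/reports/healthchecks/{{ report_filename_prefix }}{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}.txt"
      run_once: true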

Ansible gather IPs of nodes in cluster

Here is a simple cluster inventory:
[cluster]
host1
host2
host3
Each host has an interface configured with an IPv4 address. This information could be gathered with the setup module and will be in ansible_facts.ansible_{{ interface }}.ipv4.address.
How do I get the IPv4 address for an interface from each individual host and make it available to every host in the cluster, so that each host knows all cluster IPs?
How could this be implemented in a role?
As @larsks already commented, the information is available in hostvars. If you, for example, want to print all cluster IPs, you'd iterate over groups['cluster']:
- name: Print IP addresses
  debug:
    var: hostvars[item]['ansible_eth0']['ipv4']['address']
  with_items: "{{ groups['cluster'] }}"
Note that you'll need to gather the facts first to populate hostvars for the cluster hosts. Make sure you have the following in the play where you call the task (or in the role where you have the task):
- name: A play
  hosts: cluster
  gather_facts: yes
After searching for a while on how to build list variables with Jinja2, here is a complete solution that requires a cluster_interface variable and pings all nodes in the cluster to check connectivity:
---
- name: gather facts about {{ cluster_interface }}
  setup:
    filter: "ansible_{{ cluster_interface }}"
  delegate_to: "{{ item }}"
  delegate_facts: True
  with_items: "{{ ansible_play_hosts }}"
  when: hostvars[item]['ansible_' + cluster_interface] is not defined
- name: cluster nodes IP addresses
  set_fact:
    cluster_nodes_ips: "{{ cluster_nodes_ips|default([]) + [hostvars[item]['ansible_' + cluster_interface]['ipv4']['address']] }}"
  with_items: "{{ ansible_play_hosts }}"
- name: current cluster node IP address
  set_fact:
    cluster_current_node_ip: "{{ hostvars[inventory_hostname]['ansible_' + cluster_interface]['ipv4']['address'] }}"
- name: ping all cluster nodes
  command: ping -c 1 {{ item }}
  with_items: "{{ cluster_nodes_ips }}"
  changed_when: false
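As a side note, the list-building loop above can be collapsed into a single expression with the map('extract') filter; a sketch assuming the same cluster_interface variable and already-gathered facts:

- name: cluster nodes IP addresses (single-expression alternative)
  set_fact:
    # extract walks hostvars[host]['ansible_<iface>']['ipv4']['address'] for every play host
    cluster_nodes_ips: "{{ ansible_play_hosts | map('extract', hostvars, ['ansible_' + cluster_interface, 'ipv4', 'address']) | list }}"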

lineinfile module of ansible with delegate_to localhost doesn't write all data to localhost, it writes only 1 random entry on localhost

I have 3 remote VMs and 1 ansible node.
I am getting the hostname of some VMs by running the hostname command on those remote VMs through the ansible shell module and registering that output in a hostname_output variable.
Then I want to pair each VM's IP (collected using gather_facts: True, {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is: on the console, the lineinfile module says the line has been added as it executes for each node delegated to localhost, yet when I check the file on localhost, only 1 entry is shown instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as specified in Ansible writing output from multiple task to a single file, but that also gave the same result, i.e. 1 entry only.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this ansible-playbook get_hostname.yml using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using ansible 2.1.0.0
I am using the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrong?
I tried to reproduce your issue and it did not happen for me, so I suspect this is a problem with your version of ansible; try with the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably an issue with file locking that I don't see happening in ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an ansible fact. You can also avoid gathering ALL facts, if all you want is the hostname, by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution for that is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to only one file, each delegated task writes to its own file (named here with the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
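Since the template task is delegated once per host, plain per-host variables work inside it; tpl.j2 could be as small as this one-liner (an assumed minimal version, matching the line format used in the earlier answers):

{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}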
