I have a CSV file that maps host names to their corresponding IP addresses. I am trying to write an Ansible playbook, using the lineinfile module with a variable, that will read the CSV file and set the hostname belonging to each IP address on the host with that IP address. I don't know if this is the way to go. I want to run the playbook against all hosts.
If you need to connect to all hosts in your CSV file and set each hostname to the corresponding name value from that file, this will do:
---
- hosts: localhost
  tasks:
    - add_host: name="{{ item.split(',')[1] | trim }}" ansible_host="{{ item.split(',')[0] }}" group=csv
      with_lines: cat host-ip.csv

- hosts: csv
  tasks:
    - hostname: name="{{ inventory_hostname }}"
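For reference, the split calls above assume host-ip.csv holds one ip,hostname pair per line; a made-up sample would be:

10.0.0.11,web01
10.0.0.12,web02
10.0.0.13,db01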
I'm trying to assign hostnames to a few hosts, gathering them from the inventory.
My inventory is:
[masters]
master.domain.tld
[workers]
worker1.domain.tld
worker2.domain.tld
[all:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/id_rsa
The code I'm using is:
- name: Set hostname
  shell: hostnamectl set-hostname {{ item }}
  with_items: "{{ groups['all'] }}"
Unfortunately, the code iterates over all items (both IPs and hostnames) and assigns the last item to all 3 hosts...
Any help would be much appreciated.
Don't loop: that goes through all your hosts on each target. Use the natural inventory loop and the inventory_hostname special var instead.
Moreover, don't use shell when there is a dedicated module:
- name: Assign hostname
  hosts: all
  gather_facts: false
  tasks:
    - name: Set hostname for target
      hostname:
        name: "{{ inventory_hostname }}"
I am adding a host using add_host:
- hosts: localhost
  tasks:
    - set_fact:
        ssh_key: "{{ lookup('file','jenkins.pem') | replace('\n','') }}"
    - add_host:
        hostname: tower
        ansible_ssh_host: x.x.x.x
        ansible_ssh_user: jenkins
        ansible_ssh_private_key_file: "{{ ssh_key }}"
    - command: ls /tmp
      delegate_to: tower
The above works fine if I specify a file path for ansible_ssh_private_key_file, but when I try to read ansible_ssh_private_key_file from a variable it does not work. Is there any way to achieve this?
You have a misunderstanding: lookup("file") returns the contents of a file, but -- as its name is a giveaway -- ansible_ssh_private_key_file wants the path on the controller node to the ssh private key file, not its contents.
You can use lookup("fileglob", "jenkins.pem", wantlist=True) | first to obtain the path to that jenkins.pem file, if you don't already have it, since I believe ansible wants that ssh_private_key_file path fully qualified.
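As a minimal sketch of that fix (assuming jenkins.pem sits next to the playbook on the controller), the add_host task could become:

- hosts: localhost
  tasks:
    - set_fact:
        # fileglob resolves to the path of the file on the controller
        ssh_key_path: "{{ lookup('fileglob', 'jenkins.pem', wantlist=True) | first }}"
    - add_host:
        hostname: tower
        ansible_ssh_host: x.x.x.x
        ansible_ssh_user: jenkins
        # pass the path to the key, not the key's contents
        ansible_ssh_private_key_file: "{{ ssh_key_path }}"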
I have my hosts in an inventory file as below:
cnamgw01b ansible_ssh_host=172.17.0.26
cnamgw01a ansible_ssh_host=172.17.1.26
cnamgw02b ansible_ssh_host=172.17.0.23
cnamgw02a ansible_ssh_host=172.17.1.23
cnamgw03a ansible_ssh_host=172.17.1.13
cnamgw03b ansible_ssh_host=172.17.0.13
These are new builds, and I would like to set the hostname based on the inventory file. I already have a script in place that updates the inventory file as new VMs are turned up and assigns each a random hostname. I would like to take this assigned hostname and set it as the host's hostname. How can I accomplish this? Also note that I use folders to subdivide the hosts by region.
You can use the ansible hostname module to set the hostname.
https://docs.ansible.com/ansible/latest/modules/hostname_module.html
- hosts: all
  tasks:
    - name: Set hostname
      hostname:
        name: "{{ inventory_hostname }}"
You could run something like this to set the system hostname to the inventory hostname:
- hosts: all
  tasks:
    - name: set system hostname
      command: hostnamectl set-hostname {{ inventory_hostname }}
That is, the variable inventory_hostname holds the name of the current host as it was named in your inventory.
This task assumes you have the hostnamectl command available. You could instead write the value of inventory_hostname to /etc/hostname and call the hostname command separately.
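A minimal sketch of that alternative (untested, for targets without hostnamectl) could be:

- hosts: all
  tasks:
    - name: persist the inventory name in /etc/hostname
      copy:
        content: "{{ inventory_hostname }}\n"
        dest: /etc/hostname
    - name: apply the name to the running system
      command: hostname {{ inventory_hostname }}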
I am trying to use ansible to modify the hostnames of a dozen newly created Virtual Machines, and I am failing to understand how to loop correctly.
Here is the playbook I've written:
---
- hosts: k8s_kvm_vms
  tasks:
    - name: "update hostnames"
      hostname:
        name: "{{ item }}"
      with_items:
        - centos01
        - centos02
...
The result is that it updates each host with each hostname. So if I have 12 machines, each hostname would be "centos12" at the end of the playbook.
I expect this behavior to essentially produce the same result as the following, if I were to write it myself in bash:
num=0
for ip in ${list_of_ips[*]}; do
    ssh $ip hostnamectl set-hostname centos${num}
    num=$((num+1))
done
The answer on this page leads me to believe I would have to include all of the IP addresses in my playbook. In my opinion, the advantage of scripting is that I could keep the same hostnames even if the IPs change (I would just copy the IP addresses into /etc/ansible/hosts), and reuse that inventory with other playbooks. I have read the ansible page on the hostname module, but its example leads me to believe that, instead of using a loop, I would have to define a task for each individual IP address. If this is the case, why use ansible over a bash script?
You can create a new variable for each of the servers in the inventory, like:
[k8s_kvm_vms]
server1 new_hostname=centos1
server2 new_hostname=centos2
Playbook:
---
- hosts: k8s_kvm_vms
  tasks:
    - name: "update hostnames"
      hostname:
        name: "{{ new_hostname }}"
I think you need sudo to change the hostname, so you should add "become: yes":
---
- hosts: k8s_kvm_vms
  become: yes
  tasks:
    - name: "update hostnames"
      hostname:
        name: "{{ new_hostname }}"
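With become enabled, a run might then look like this (the inventory and playbook file names here are just placeholders):

ansible-playbook -i hosts update_hostnames.yml --ask-become-pass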
I have 3 remote VMs and 1 ansible node.
I am getting the hostname of some VMs by running the hostname command on those remote VMs through ansible's shell module and registering the output in a hostname_output variable.
Then I want to pair each VM's IP (collected using gather_facts: True, {{ ansible_default_ipv4.address }}) with its hostname and append the pair to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is: on the console, the lineinfile module says the line has been added as it executes for each node and is delegated to localhost, yet when I check the file on localhost, only 1 entry is there instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as specified in "Ansible writing output from multiple task to a single file", but that gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with the redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using ansible 2.1.0.0 with the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss something or write something incorrectly?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of ansible, so try with the latest.
That being said, I think you might be able to make it work using serial: 1, it is probably an issue with file locking that I don't see happening in ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable which is provided as an ansible fact, and you can also avoid gathering ALL facts if all you want is the hostname by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution for that is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to only one file, have each host delegation write to its own file (here named using the inventory_hostname value). That way there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
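For instance, a minimal tpl.j2 matching the variables registered above would be the single line:

{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}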