How can I edit a remote cfg file? In my case, I must replace the "hostname" with the name of each remote machine.
This needs to be automated because I will deploy it on 300+ servers.
I must be able to get the remote hostname and put it in the cfg file with Ansible.
Thanks
############# file for config host ############
---
- hosts: computer_user
  remote_user: toto
  tasks:
    - name: "config zabbix agent"
      lineinfile:
        path: /etc/zabbix.cfg
        regexp: '(.*)hostname_local(.*)'
        line: '%hostname%'
########### file_cfg_on_computer_user #########
hostname_local: here_i_want_put_the_hostname_of_my_computer_user_with_a_like_%hostname%
I'm not really sure what you want exactly, but if you want to get the hostname of the system your playbook is running against, then you have two possibilities:
You can get the value of inventory_hostname: it is the hostname as configured in Ansible's inventory host file.
Or you can get the value of the Ansible fact ansible_hostname: this one is discovered during the fact-gathering phase.
You can find more info about host variables and facts here.
Ansible defines many special variables at runtime.
You can use {{ inventory_hostname }} to return the inventory name of the 'current' host being iterated over in the play.
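For example, here is a minimal sketch (untested) that reuses the lineinfile task from your playbook with inventory_hostname; the regexp is adjusted to anchor the hostname_local key, so adapt it to your actual cfg format:
- hosts: computer_user
  remote_user: toto
  tasks:
    - name: "config zabbix agent with the inventory hostname"
      lineinfile:
        path: /etc/zabbix.cfg
        regexp: '^hostname_local.*'
        line: 'hostname_local: {{ inventory_hostname }}'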
Or you can execute a remote command and use the result in the next task:
---
- hosts: computer_user
  remote_user: toto
  tasks:
    - name: "get hostname from remote host"
      shell: hostname
      register: result_hostname

    - name: "config zabbix agent"
      lineinfile:
        path: /etc/zabbix.cfg
        regexp: '(.*)hostname_local(.*)'
        line: '{{ result_hostname.stdout }}'
I'm trying to create a playbook which basically consists of 2 plays (don't ask why):
---
- hosts: all
  tasks:
    - name: get the hostname of machine and save it as a variable
      shell: hostname
      register: host_name
      when: ansible_host == "x.x.x.x"  # will be filled by my application

- hosts: "{{ host_name.stdout }}"
  tasks:
    - name: use the variable as hostname
      shell: whoami
I don't have any hostname information in my application, so I need to trigger my playbook with an IP address, get the hostname of that machine, and save it in a variable to use in my other tasks, to avoid a "when" condition on every task.
The problem is that I'm able to use the "host_name" variable in all other fields except "hosts"; it gives me an error like this when I try to run:
ERROR! The field 'hosts' has an invalid value, which includes an undefined variable. The error was: 'host_name' is undefined
By default, Ansible itself gathers some information about a host. This happens at the beginning of a playbook's execution, right after PLAY, in TASK [Gathering Facts].
This automatic gathering of information about a system can be turned off via gather_facts: no; by default it is active.
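For example, a minimal sketch of a play that skips fact gathering:
- hosts: all
  gather_facts: no  # skips TASK [Gathering Facts]
  tasks:
    - name: facts like ansible_hostname are not available here
      debug:
        msg: "fact gathering was skipped"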
This collected information is called Ansible Facts. An example of the collected facts is shown in the Ansible docs. For your host, you can print out all Ansible Facts:
either in the playbook as a task:
- name: Print all available facts
  debug:
    var: ansible_facts
or via CLI as an adhoc command:
ansible <hostname> -m setup
The Ansible Facts contain values like ansible_hostname, ansible_fqdn, ansible_domain or even ansible_all_ipv4_addresses. This is the simplest way to work with the hostname of the client.
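If you only care about the hostname-related facts, the setup module also accepts a filter argument, which works from the CLI as well (same placeholder host pattern as above):
ansible <hostname> -m setup -a "filter=ansible_hostname"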
If you want to output the hostname and IP addresses that Ansible has collected, you can do it with the following tasks for example:
- name: Print hostname
  debug:
    var: ansible_hostname

- name: Print IP addresses
  debug:
    var: ansible_all_ipv4_addresses
If you run your playbook against all hosts, you can check the IP address and stop execution directly for the "wrong" clients.
---
- hosts: all
  tasks:
    - name: terminate execution for wrong hosts
      assert:
        that: '"x.x.x.x" is in ansible_all_ipv4_addresses'
        fail_msg: Terminating because IP did not match
        success_msg: "Host matched. Hostname: {{ ansible_hostname }}"

    # your task for desired host
I have a requirement to pass a server hostname at runtime to an Ansible Tower template. The issue I faced is that we don't know the hostname before running the template, so we can't configure the hostname in the Ansible Tower inventory. I tried googling but haven't found a solution yet.
My main.yaml file starts like this:
- name: Execution server
  hosts: <hostname>
  gather_facts: false
  serial: 1
The error I am getting is:
1 plays in main.yml
[WARNING]: Could not match supplied host pattern, ignoring: <hostname>
You can use a target_host variable passed in at runtime. I like setting a default with this:
- name: Execution server
  hosts: "{{ target_host | default('all') }}"
  gather_facts: false
  serial: 1
Then define target_host in the job template's extra variables, or prompt for it at launch.
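Outside Tower, the equivalent with plain ansible-playbook would be to pass it as an extra variable (the hostname here is just an example):
ansible-playbook main.yaml -e "target_host=myhost01"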
I have my hosts in an inventory file as below:
cnamgw01b ansible_ssh_host=172.17.0.26
cnamgw01a ansible_ssh_host=172.17.1.26
cnamgw02b ansible_ssh_host=172.17.0.23
cnamgw02a ansible_ssh_host=172.17.1.23
cnamgw03a ansible_ssh_host=172.17.1.13
cnamgw03b ansible_ssh_host=172.17.0.13
These are new builds, and I would like to set the hostname based on the inventory file. I already have a script in place that updates the inventory file as new VMs are turned up and assigns a random hostname. I would like to take this assigned hostname and set it as the host's hostname. How can I accomplish this? Also note that I use folders to subdivide the hosts by region.
You can use the Ansible hostname module to set the hostname:
https://docs.ansible.com/ansible/latest/modules/hostname_module.html
- hosts: all
  tasks:
    - name: Set hostname
      hostname:
        name: "{{ inventory_hostname }}"
You could run something like this to set the system hostname to the inventory hostname:
- hosts: all
  tasks:
    - name: set system hostname
      command: hostnamectl set-hostname {{ inventory_hostname }}
That is, the variable inventory_hostname holds the name of the current host as it was named in your inventory.
This task assumes you have the hostnamectl command available. You could instead write the value of inventory_hostname to /etc/hostname and call the hostname command separately.
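A minimal sketch (untested) of that alternative, persisting the inventory name in /etc/hostname and then applying it with the hostname command:
- hosts: all
  become: yes  # modifying /etc/hostname requires root
  tasks:
    - name: persist the inventory name in /etc/hostname
      copy:
        content: "{{ inventory_hostname }}\n"
        dest: /etc/hostname

    - name: apply the hostname to the running system
      command: hostname {{ inventory_hostname }}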
I am trying to use ansible to modify the hostnames of a dozen newly created Virtual Machines, and I am failing to understand how to loop correctly.
Here is the playbook I've written:
---
- hosts: k8s_kvm_vms
  tasks:
    - name: "update hostnames"
      hostname:
        name: "{{ item }}"
      with_items:
        - centos01
        - centos02
        ...
The result is that it updates each host with each hostname. So if I have 12 machines, each hostname would be "centos12" at the end of the playbook.
If I were to write it myself in bash, I would expect this behavior to essentially produce the same result as:
num=0
for ip in ${list_of_ips[*]}; do
    ssh $ip hostnamectl set-hostname centos${num}
    num=$((num+1))
done
The answer on this page leads me to believe I would have to include all of the IP addresses in my playbook. In my opinion, the advantage of scripting would be that I could keep the same hostnames even if their IPs change (I would just have to copy the IP addresses into /etc/ansible/hosts), which I could reuse with other playbooks. I have read the Ansible page on the hostname module, but their example leads me to believe that, instead of using a loop, I would have to define a task for each individual IP address. If this is the case, why use Ansible over a bash script?
You can define a new variable for each of the servers in the inventory, like this:
[k8s_kvm_vms]
server1 new_hostname=centos1
server2 new_hostname=centos2
Playbook:
---
- hosts: k8s_kvm_vms
  tasks:
    - name: "update hostnames"
      hostname:
        name: "{{ new_hostname }}"
I think you need sudo to change the hostname, so you should add "become: yes":
---
- hosts: k8s_kvm_vms
  become: yes
  tasks:
    - name: "update hostnames"
      hostname:
        name: "{{ new_hostname }}"
I have 3 remote VMs and 1 ansible node.
I am getting the hostname of the VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in a hostname_output variable.
Then I want to combine each VM's IP (collected via gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is that, on the console, the lineinfile module reports that the line has been added when the module executes for each node and is delegated to localhost, yet when I check the file on localhost, only 1 entry is shown instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as suggested in Ansible writing output from multiple task to a single file, but that also gave the same result, i.e. 1 entry only.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using ansible 2.1.0.0.
I am using the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrongly?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try with the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably an issue with file locking that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname, you could use the ansible_hostname variable, which is provided as an Ansible fact. You can also avoid gathering ALL facts, if all you want is the hostname, by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname

    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to a single file, have each host delegation write to its own file (here using the inventory_hostname value). That way, there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true

    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true

    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1

    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
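A minimal tpl.j2 for this sketch could be a single line; since each host renders its own file, it can use that host's facts and registered variable directly (assuming the play above):
{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}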