I would like to use the lineinfile module to automatically fill an inventory file (inventory/hosts) from another inventory file that I use to deploy VMs on vSphere.
When the file is empty there is no problem, but when it is already filled, the module does not insert the lines because it sees that they already exist elsewhere in the file. But I need those lines to build my inventory file.
How can I do this?
Here is my piece of code:
- name: Add parameters for connection of VMs Windows
  ansible.builtin.lineinfile:
    path: /home/ansible/inventory/hosts
    line: "{{ item }}"
  with_items:
    - '[{{ group_names[0] }}:vars]'
    - ansible_user=administrator
    - ansible_password='{{ lookup('file', '/home/ansible/directory/file_'+group_names[0] ) }}'
    - ansible_connection=winrm
    - ansible_port=5986
    - ansible_winrm_server_cert_validation=ignore
  delegate_to: localhost
It only inserts the ansible_password line for me, since the other lines already exist for other groups of VMs. I also tried with blockinfile.
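For reference, a blockinfile variant with a per-group marker might avoid the duplicate-line problem, since each group's block is then tracked independently (a sketch only, assuming each host belongs to exactly one group):

- name: Add parameters for connection of VMs Windows (per-group block)
  ansible.builtin.blockinfile:
    path: /home/ansible/inventory/hosts
    # a distinct marker per group keeps the blocks from clashing with each other
    marker: "# {mark} ANSIBLE MANAGED BLOCK {{ group_names[0] }}"
    block: |
      [{{ group_names[0] }}:vars]
      ansible_user=administrator
      ansible_password={{ lookup('file', '/home/ansible/directory/file_' + group_names[0]) }}
      ansible_connection=winrm
      ansible_port=5986
      ansible_winrm_server_cert_validation=ignore
  delegate_to: localhost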
This is my inventory:
[servers1]
ubuntu-vm1
ubuntu-vm2
ubuntu-vm3
[servers2]
centos-vm1
centos-vm2
centos-vm3
What my playbook does is check whether the server needs to be rebooted (I have that part figured out so I won't post it here; the result is stored in the variable reboot_stat). Now I want it to add a line with the hostname (the name of the server in the inventory) to a file on my localhost when the condition that the server needs a reboot is met.
This is what it looks like now:
- name: add to file what server needs a reboot
  lineinfile:
    path: /root/reboot-servers
    line: '{{ inventory_hostname }}'
  delegate_to: localhost
  when: reboot_stat.stat.exists
Let's say that the /root/reboot-servers file looks like this:
[servers1]
[servers2]
And let's say only ubuntu-vm1 and centos-vm2 need a reboot. What I want is that, when I run the playbook, it appends each hostname under its group header so that the file looks like this:
[servers1]
ubuntu-vm1
[servers2]
centos-vm2
Edit: Modified for O.P.'s comment.
Try adding the insertafter option in the task to tell lineinfile where to put the hostname:
- name: add to file what server needs a reboot
  lineinfile:
    path: /root/reboot-servers
    line: '{{ inventory_hostname }}'
    # insertafter: "{{ 'servers1' if inventory_hostname in groups['servers1'] else 'servers2' }}"
    insertafter: "{{ primary_group_tag }}"
  delegate_to: localhost
  when: reboot_stat.stat.exists
Add the following to your inventory file to set the variable for each group:
# lineinfile treats insertafter as a regular expression, so the brackets are escaped
[servers1:vars]
primary_group_tag='\[servers1\]'

[servers2:vars]
primary_group_tag='\[servers2\]'
I want to restrict an Ansible play to a specific host
Here's a cut down version of what I want:
- hosts: some_host_group
  tasks:
    - name: Remove existing server files
      hosts: 127.0.0.1
      file:
        dest: /tmp/test_file
        state: present

    - name: DO some other stuff
      file:
        ...
I want, as an early task, to remove a local directory (I've created a file in the example as it's a more easily observed test). I was under the impression that I could limit a task to a set of hosts with the "hosts" parameter on the task, but I get this error:
ERROR! 'hosts' is not a valid attribute for a Task
$ansible --version
ansible 2.3.1.0
Thanks.
PS I could wrap the ansible in a shell fragment, but that's ugly.
You should use delegate_to or local_action and tell Ansible to run the task only once (otherwise it will try to delete the directory as many times as there are target hosts in your play, although that won't be a problem).
You should also use state: absent, not present, if you want to remove the directory, as you stated.
- name: Remove existing server files
  delegate_to: 127.0.0.1
  run_once: true
  file:
    dest: /tmp/test_file
    state: absent
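In context, the task would sit at the top of the existing play, something like this (a sketch; the second task is only a placeholder for the rest of your tasks):

- hosts: some_host_group
  tasks:
    - name: Remove existing server files
      delegate_to: 127.0.0.1
      run_once: true
      file:
        dest: /tmp/test_file
        state: absent

    # placeholder for the remaining tasks that should run on the group
    - name: Do some other stuff
      debug:
        msg: "runs on {{ inventory_hostname }}"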
There are syntax errors in your playbook; have a look at Ansible Intro, Local Playbooks, and Delegation.
- hosts: localhost
  tasks:
    - name: Remove existing server files
      file:
        dest: /tmp/test_file
        state: absent

- hosts: some_host_group
  tasks:
    - name: DO some other stuff
      file:
        ...
I have a trivial task: fill a particular Ansible inventory file with the newly deployed VMs' names under their roles.
Here is my playbook:
- hosts: 127.0.0.1
  tasks:
    - name: Add host to inventory file
      lineinfile:
        dest: "{{ inv_path }}"
        regexp: '^\[{{ role }}\]'
        insertafter: '^#\[{{ role }}\]'
        line: "{{ new_host }}"
Instead of adding a new line after [role], Ansible replaces that string with the hostname. My aim is simply to insert after it. If there is no matching role, it should skip adding the new line.
How can I achieve this? Thanks!
[role] is getting replaced because the regexp parameter is the regular expression Ansible looks for and replaces with the line parameter. Just move that value to insertafter and get rid of regexp.
As for the other requirement, you will need another task that checks whether [role] exists first. Your lineinfile task can then evaluate the result of the checker task to determine whether it should run.
- name: Check if role is in inventory file
  # -F makes grep match the bracketed header literally
  shell: "grep -F '[{{ role }}]' {{ inv_path }}"
  ignore_errors: True
  register: check_inv

- name: Add host to inventory file
  lineinfile:
    dest: "{{ inv_path }}"
    insertafter: '^\[{{ role }}\]$'
    line: "{{ new_host }}"
  when: check_inv.rc == 0
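A small optional refinement of the checker task, with the same logic: changed_when: False keeps the grep from being reported as a change on every run:

- name: Check if role is in inventory file
  shell: "grep -F '[{{ role }}]' {{ inv_path }}"
  register: check_inv
  ignore_errors: True
  changed_when: False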
I have 3 remote VMs and 1 ansible node.
I am getting the hostname of some VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to take each VM's IP (collected using gather_facts: True and {{ ansible_default_ipv4.address }}) together with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is: on the console, the lineinfile module reports that the line has been added each time the module is executed for a node and delegated to localhost, yet when I check the file on localhost, only 1 entry is there instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as described in Ansible writing output from multiple task to a single file, but that gave the same result, i.e. 1 entry only.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with the redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this ansible-playbook get_hostname.yml using command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using Ansible 2.1.0.0 with the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss something or write something incorrectly?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try with the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably an issue with file locking that I don't see happening in Ansible 2.3. Also, instead of using a shell task to gather the hostname, you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if the hostname is all you want by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname

    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:

- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
One solution is to use serial: 1 on your play, which forces non-parallel execution across your hosts. But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to a single file, each delegated host writes to its own file (here named after the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output

    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true

    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true

    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1

    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
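For reference, a minimal tpl.j2 matching the line format used elsewhere in this question would be a single line:

{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}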
In order to perform some operations locally (not on the remote machine), I need to put the content of an ansible variable inside a temporary file.
Please note that I am looking for a solution that takes care of generating the temporary file in a location where it can be written (no hardcoded names), and that also takes care of removing the file, as we do not want to leave things behind.
You should be able to use the tempfile module, followed by either the copy or template modules. Like so:
- hosts: localhost
  tasks:
    # Create a file named ansible.{random}.config
    - tempfile:
        state: file
        suffix: config
      register: temp_config

    # Render template content to it
    - template:
        src: templates/configfile.j2
        dest: "{{ temp_config.path }}"
      vars:
        username: admin
Or if you're running it in a role:
- tempfile:
    state: file
    suffix: config
  register: temp_config

- copy:
    content: "{{ lookup('template', 'configfile.j2') }}"
    dest: "{{ temp_config.path }}"
  vars:
    username: admin
Then just pass temp_config.path to whatever module you need to pass the file to.
It's not a great solution, but the alternative is writing a custom module to do it in one step.
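For example, if the rendered file is consumed by a hypothetical local script, the usage plus the cleanup the question asks for could look like this (the command name is illustrative only):

# Hand the rendered file to a local command (illustrative command name)
- command: /usr/local/bin/consume-config {{ temp_config.path }}

# Remove the temporary file so nothing is left behind
- file:
    path: "{{ temp_config.path }}"
    state: absent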
Rather than doing it with a file, why not just use the environment? This way you can easily work with the variable; it will stay alive throughout the Ansible session and you can easily retrieve it in any step or outside of them.
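That suggestion could look something like this (a minimal sketch; the command and variable names are made up):

- hosts: localhost
  tasks:
    # Expose the variable's content to a local command via an environment variable
    - command: /usr/local/bin/some_local_tool
      environment:
        MY_DATA: "{{ some_ansible_variable }}"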
Although using the shell/application environment is probably the way to go, if you specifically want to use a file to store variable data, you could do something like this:
- hosts: server1
  tasks:
    - shell: cat /etc/file.txt
      register: some_data

    - local_action: copy dest=/tmp/foo.txt content="{{ some_data.stdout }}"

- hosts: localhost
  tasks:
    - set_fact: some_data="{{ lookup('file', '/tmp/foo.txt') }}"

    - debug: var=some_data
As for your requirement to give the file a unique name and clean it up at the end of the play, I'll leave that implementation to you.
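A sketch of that part, assuming the tempfile module mentioned above is used in place of the hardcoded /tmp/foo.txt:

- hosts: server1
  tasks:
    - shell: cat /etc/file.txt
      register: some_data

    # Generate a uniquely named temporary file on the controller
    - tempfile:
        state: file
        suffix: vardata
      register: var_file
      delegate_to: localhost

    - copy:
        content: "{{ some_data.stdout }}"
        dest: "{{ var_file.path }}"
      delegate_to: localhost

    # ... use {{ var_file.path }} in whatever local steps need it ...

    # Clean up so nothing is left behind
    - file:
        path: "{{ var_file.path }}"
        state: absent
      delegate_to: localhost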