Ansible - Issue in writing output to a CSV file

Ansible version - 2.9
I'm facing an issue writing output to a CSV file: the output is not written to the file consistently.
I have an inventory file with three server IPs; the playbook runs a command to check the disk space on each server and writes the output to a CSV file.
Sometimes all three servers' details end up in the file, sometimes only one or two of them do.
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    filext: ".csv"
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: username_on_the_host
    - name: get current dir
      local_action: command pwd
      register: current_dir
    - name: create dir
      file: path={{ current_dir.stdout }}/HCT state=directory
    - name: Set file path here
      set_fact:
        file_path: "{{ current_dir.stdout }}/HCT/HCT_check{{ filext }}"
    - name: Creates file
      file: path={{ file_path }} state=touch

# Writing to a csv file
- hosts:
    - masters
  become: false
  vars:
    disk_space: "Able to get disk space for the CM {{ hostname }} "
    disk_space_error: "The server {{ hostname }} is down for some reason. Please check manually."
    disk_space_run_status: "{{ disk_space }}"
    cur_date: "{{ ansible_date_time.iso8601 }}"
  tasks:
    - name: running command to get file systems which are occupied
      command: bash -c "df -h | awk '$5>20'"
      register: disk_space_output
      changed_when: false
      ignore_errors: True
      no_log: True
    - name: Log the task get list of file systems with space occupied
      lineinfile:
        dest: "{{ hostvars['localhost']['file_path'] }}"
        line: "File system occupying disk space, {{ hostname }}, {{ ip_address }}, {{ cur_date }}"
        insertafter: EOF
        state: present
      delegate_to: localhost
Please help to resolve this issue.

The issue is that the task "Log the task get list of file systems with space occupied" is executed in parallel for the 3 servers, so you are running into concurrent writes to the same file.
One solution is to use the serial keyword at play level with a value of 1; this way, all the tasks are executed for each server, one server at a time.
- hosts:
    - masters
  become: false
  serial: 1
  vars:
    [...]
Another solution is to have the task executed for only 1 server but looping over the results of all servers by using hostvars:
- name: Log the task get list of file systems with space occupied
  lineinfile:
    dest: "{{ hostvars['localhost']['file_path'] }}"
    line: "File system occupying disk space, {{ hostvars[item].hostname }}, {{ hostvars[item].ip_address }}, {{ hostvars[item].cur_date }}"
    insertafter: EOF
    state: present
  run_once: True
  loop: "{{ ansible_play_hosts }}" # Looping over all hosts of the play
  delegate_to: localhost

Related

Ansible: How to Define Variables in Playbook Which Do Not Change per Host

How to set variables in an Ansible playbook which do not change per host?
Per S.O.P. before posting, I read the Ansible docs on Using Variables, etc., and of course searched Stack Overflow and the Internet for possible answers. What I've seen discussed is where to define variables, but not how to set variables in a playbook so that they do not change with each host in the inventory.
I have an Ansible playbook where variables are set from Ansible facts.
The variables are used to create a string with the current date and time, which is used as the filename for a log,
e.g. HealthCheckReport-YYYY-MM-DD_HHMM.txt
A time-stamped file is created, then the results of the command run for each server are written to this file.
If the time (minutes) changes while the play is still iterating through the inventory, the variable changes, throwing a "path does not exist" error for each of the remaining hosts.
The example below is an Ansible playbook which runs the nslookup command for the hosts listed in the default inventory file.
Set and concatenate variables
Create a file with a time stamped filename (The OS is SuSe Linux)
Run the nslookup command on hosts in the inventory file
Write the command results to the time stamped file
---
- name: Output to Filename with Timestamp
  hosts: healthchecks
  connection: local
  gather_facts: yes
  strategy: linear
  order: inventory
  vars:
    report_filename_prefix: "HealthCheckResults-"
    report_date_time: "{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}"
    report_filename_date: "{{ report_filename_prefix }}{{ report_date_time }}.txt"
    report_path: "/reports/healthchecks/{{ report_filename_date }}"
  tasks:
    - name: Create file with timestamped filename
      delegate_to: localhost
      lineinfile:
        path: "{{ report_path }}"
        create: yes
        line: "Start: Health Check Report\n{{ report_path }}"
      run_once: true
    - name: Run nslookup command
      delegate_to: localhost
      throttle: 1
      command: nslookup {{ inventory_hostname }}
      register: nslookup_result
    - name: Append nslookup results to a file
      delegate_to: localhost
      throttle: 1
      blockinfile:
        state: present
        insertafter: EOF
        dest: "{{ report_path }}"
        marker: "- - - - - - - - - - - - - - - - - - - - -"
        block: |
          Server: {{ inventory_hostname }}
          Environment: {{ environmentz }}
          {{ nslookup_result.stdout_lines.3 }}
          {{ nslookup_result.stdout_lines.4 }}
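For reference, one way to keep the timestamp from drifting while the play iterates (a minimal sketch of my own, not from the original post; frozen_report_path is an illustrative name) is to compute the path once on the control node and have every host read it through hostvars, mirroring the hostvars['localhost']['file_path'] pattern used in the first question above:
---
- name: Compute the report path once
  hosts: localhost
  gather_facts: yes
  tasks:
    - name: Freeze the timestamped filename as a fact on localhost (evaluated only once)
      set_fact:
        frozen_report_path: "/reports/healthchecks/HealthCheckResults-{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}.txt"

- name: Health checks using the frozen path
  hosts: healthchecks
  connection: local
  gather_facts: no
  tasks:
    - name: Append a line per host; the path no longer changes mid-play
      lineinfile:
        path: "{{ hostvars['localhost']['frozen_report_path'] }}"
        create: yes
        line: "Checked {{ inventory_hostname }}"
      delegate_to: localhost
      throttle: 1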

Ansible: Is there a way to look into a file, split the content in the file based on a specific criteria & then copy that content to another file?

I'm new to Ansible & I've been trying to read the content of a file, split it based on a specific criterion, & then copy or return that content.
for example, a file sample.txt contains:
userid= "abc"
I want to read the content in sample.txt & split wherever there's an '=' sign, so that I can extract the creds (userid & abc) & then use them further.
Here are drafts of the code snippets I've tried.
---
- name: extracting creds
  hosts: servers
  tasks:
    - name: read secure value
      lineinfile:
        path: /home/usr/Desktop/sample.txt
      register: creds
      debug:
        msg: "{{ creds.split('=') }}"
Another code I tried:
---
- name: Creds
  hosts: servers
  vars:
    test: /home/usr/Desktop/sample.txt
  tasks:
    - debug:
        msg: "{{ lookup('file', test).split('=') }}"
Neither of them works :( What should I do to get this done?
You can also try the following approach to read the contents from the file and split them.
---
- hosts: localhost
  tasks:
    - name: add host
      add_host:
        hostname: "{{ server1 }}"
        groups: host1

- hosts: host1
  become: yes
  tasks:
    - name: Fetch the sample file
      slurp:
        src: /tmp/sample.txt
      register: var1
    - name: extract content for matching pattern
      set_fact:
        sample_var1: "{{ var1['content'] | b64decode | regex_findall('(.+=.+)', multiline=True, ignorecase=True) }}"
    - debug:
        msg: "{{ item.split('=')[1] }}"
      loop: "{{ sample_var1 }}"
According to the Ansible docs, this is what lineinfile does. So, if you want to modify some content from one file and write it to another file, this module wouldn't help.
This module ensures a particular line is in a file, or replace an
existing line using a back-referenced regular expression. This is
primarily useful when you want to change a single line in a file
only.
lookup, on the other hand, works on the control machine. Judging by the code you have added, maybe you were trying to use the file on the target host, so lookup wouldn't help either.
If the file is available on the local/control host, then read the file, split the content, write it to another file on the control machine, and then copy the final file to the target host using the copy module. Here is a sample that reads a file from the control host and splits every line using = as a separator.
- hosts: localhost
  tasks:
    - debug:
        msg: "{{ item.split('=') }}"
      with_lines: "cat /home/usr/Desktop/sample.txt"
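A minimal sketch of the rest of that flow (my own illustration; the output path /tmp/creds.txt is hypothetical): write the split values to a file on the control machine, then push that file to the targets with copy:
- hosts: localhost
  tasks:
    - name: Write the value after '=' from each line to a local file (illustrative path)
      lineinfile:
        path: /tmp/creds.txt
        create: yes
        line: "{{ item.split('=')[1] | trim }}"
      with_lines: "cat /home/usr/Desktop/sample.txt"
      when: "'=' in item"   # skip lines without an '=' sign

- hosts: servers
  tasks:
    - name: Copy the generated file to the target hosts
      copy:
        src: /tmp/creds.txt
        dest: /tmp/creds.txt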
If the file is on the remote/managed host, then you can use something like the below:
- hosts: servers
  tasks:
    - command: "cat /home/usr/Desktop/sample.txt"
      register: content
    - debug:
        msg: "{{ item.split('=') }}"
      loop: "{{ content.stdout_lines }}"

lineinfile module of ansible with delegate_to localhost doesn't write all data to localhost, it writes only 1 random entry on localhost

I have 3 remote VMs and 1 ansible node.
I am getting the hostname of each VM by running the hostname command on the remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to combine each VM's IP (collected using gather_facts: True, {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is that, on the console, the lineinfile module reports the line has been added when the module runs for each node delegated to localhost, yet when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module as suggested in Ansible writing output from multiple task to a single file, but that also gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with the redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using ansible 2.1.0.0.
I am using the default ansible.cfg only, with no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrongly?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try with the latest.
That being said, I think you might be able to make it work using serial: 1. It is probably an issue with file locking that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts if all you want is the hostname by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic:
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
One solution is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to only one file, each delegated task writes to its own file (here named after the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
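For completeness, a minimal tpl.j2 (a sketch, not shown in the original answer) matching the IP/hostname line format written elsewhere in this question could be:
{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}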

lineinfile ansible module skips a line

I need to know the index of each host name in the inventory. I am using the code below to create a variable file that I can use in a subsequent playbook.
- name: Debug me
  hosts: hosts
  tasks:
    - debug: msg="{{ inventory_hostname }}"
    - debug: msg="{{ play_hosts.index(inventory_hostname) }}"
    - local_action: 'lineinfile create=yes dest=/tmp/test.conf
        line="host{{ play_hosts.index(inventory_hostname) }}=
        {{ inventory_hostname }}"'
I have the following inventory file
[hosts]
my.host1.com
my.host2.com
Now when I run this, the test.conf generated under /tmp sometimes has both hostnames, like this:
host1= my.host2.com
host0= my.host1.com
When I run the same playbook a few times, emptying test.conf before each run, quite often the file has only one entry:
host1= my.host2.com
or
host0= my.host1.com
How come the same Ansible playbook behaves differently?
I believe the issue is that you're running two threads against different hosts, and using local_action is not thread safe.
Try using the serial keyword:
- name: Debug me
  hosts: hosts
  serial: 1
  tasks:
    - debug: msg="{{ inventory_hostname }}"
    - debug: msg="{{ play_hosts.index(inventory_hostname) }}"
    - local_action: 'lineinfile create=yes dest=/tmp/test.conf
        line="host{{ play_hosts.index(inventory_hostname) }}=
        {{ inventory_hostname }}"'
Edit: A better way to do this, if you're just trying to operate on the list of hosts in the inventory from the localhost, would be to avoid running the action per host with local_action in the first place.
- name: Debug me
  hosts: localhost
  tasks:
    - lineinfile:
        create: yes
        dest: /tmp/test.conf
        line: "host{{ groups['hosts'].index(item) }}={{ item }}"
      with_items: "{{ groups['hosts'] }}"
This will get you the results you desire. Then you can add another play to do operations against the hosts themselves.
The solution I use to avoid race conditions with the non-thread-safe local_action: lineinfile when writing gathered data to a local file: split it across 2 different plays in the same file.
eg:
- name: gather_date
  hosts: all
  any_errors_fatal: false
  gather_facts: no
  tasks:
    - name: get_Aptus_device_count_list
      shell: gather_data.sh
      become: true
      register: Aptus_device_count_list
      changed_when: false

- name: Log_gathered_date
  hosts: all
  any_errors_fatal: false
  gather_facts: no
  tasks:
    - name: log_gathered_info
      local_action:
        module: lineinfile
        dest: /home/rms-mit/MyAnsible/record_Device_count_collection.out
        line: "\n--- {{ inventory_hostname }} --- \n
          {{ Aptus_device_count_list.stdout }} \n.\n---\n"
      changed_when: false

Ansible: register content not available for other host

Below is a part of a playbook in Ansible 2.1:
- hosts: localhost
  any_errors_fatal: true
  tasks:
    - name: Bla Bla
      file: path=/var/tmp/somedir state=directory
      #ignore_errors: no
    - name: Create directory for every host
      file: path=/var/tmp/somedir/{{ item }} state=directory
      with_items: "{{ groups['XYZ'] }}"
    - name: Get File contents of NewFile
      shell: cat NewFile.txt executable=/bin/bash
      register: file_contents

- hosts: XYZ
  #any_errors_fatal: true
  vars:
    num_hosts: "{{ groups['XYZ'] | length }}"
  serial: num_hosts
  tasks:
    - name: Copy files to corresponding directories
      vars:
        path: /var/tmp/somedir/{{ item[0] }}
      synchronize: mode=pull src={{ item[1] }} dest={{ path }}
      with_nested:
        - "{{ groups['XYZ'] }}"
        - with_lines: cat NewFile.txt
This does not work.
The problem is that I am not able to reference file_contents, which was registered on localhost, and Ansible does not let me cat NewFile.txt from the hosts in XYZ.
Is there any way to do this in a simple manner? I need to check the contents of NewFile.txt in this playbook only and then use them to copy files from remote to local.
As mentioned in the comments, facts (and all variables) are stored on a per-host basis. If you have registered a value from a task running on localhost, you can access it from any task running in the context of other hosts through the global hostvars dict. All hosts and their facts are stored in there:
hostvars['localhost']['file_contents']
I am not entirely sure whether simply registered variables are available in the hostvars dict. If not, you have to use set_fact in the first play to store the value as a fact.
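A minimal sketch of that pattern (my own illustration, assuming NewFile.txt sits next to the playbook and XYZ is the inventory group from the question; new_file_contents is an illustrative name):
- hosts: localhost
  tasks:
    - name: Get File contents of NewFile
      command: cat NewFile.txt
      register: file_contents
    - name: Store the registered result as a fact on localhost
      set_fact:
        new_file_contents: "{{ file_contents.stdout }}"

- hosts: XYZ
  tasks:
    - name: Read the value that was set on localhost via hostvars
      debug:
        msg: "{{ hostvars['localhost']['new_file_contents'] }}"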
