Ansible - installed RPM packages for multiple servers

I am trying to query the installed RPM packages on every server and export the result to a JSON file.
So far I have managed to export one JSON file per server, with the server name as the file name:
---
- hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: RPM packages query
      shell: rpm -qa --queryformat '%{NAME} %{VERSION}\n'
      register: query
    - name: list report
      debug:
        var: query.stdout_lines
    - local_action: copy content="{{ query.stdout_lines }}" dest="/home/user_name/{{ ansible_hostname }}.json"
Output, where the server name is the file name (host_name1.json):
[
"spice-glib 0.35",
"keyutils-libs-devel 1.5.8",
"mapserver 1.0",
"perl-HTTP-Cookies 6.01",
"open-vm-tools-desktop 11.0.5",
"libreport-python 2.1.11",
]
Question: how do I export the entire query.stdout_lines result from all servers into a single file, including the host name? For example:
[**host_name1_here**
]
[
"spice-glib 0.35",
"keyutils-libs-devel 1.5.8",
"mapserver 1.0",
"perl-HTTP-Cookies 6.01",
"open-vm-tools-desktop 11.0.5",
"libreport-python 2.1.11",
]
[**host_name2_here**
]
[
"spice-glib 0.35",
"keyutils-libs-devel 1.5.8",
"mapserver 1.0",
"perl-HTTP-Cookies 6.01",
]
Or something like that.
Or is there another, perhaps better way to achieve this?

- hosts: all
  become: no
  vars:
    output_file: details.csv
  tasks:
    - block:
        - name: RPM packages query
          shell: rpm -qa --queryformat '%{NAME} %{VERSION}\n'
          register: rpmdata
          changed_when: false

        - name: Empty existing file if already exists
          copy:
            dest: "{{ output_file }}"
            content: 'Details'
          run_once: yes
          delegate_to: localhost

        - name: Add details to file
          lineinfile:
            path: "{{ output_file }}"
            line: "[{{ inventory_hostname }}]\n\
                   [{{ rpmdata.stdout }}]\n\n"
          delegate_to: localhost
A number of changes had to be made to your original code. Note the following:
Using the shell module leaves the task's changed status set to true for any command it runs. Since your task does not change anything, I have set changed_when: false.
The second task empties an existing output file, or creates the file if it is not present.
The lineinfile module can be used to append to a file.
As you may have noticed, delegate_to makes tasks 2 and 3 run on localhost, i.e. the machine Ansible is run from, which is common to all the hosts in the play.
An alternative that writes the whole report in one shot is sketched below.
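If you prefer a single JSON document (closer to your original output), every host's registered result is reachable via hostvars, so one delegated task can render them all at once. A minimal sketch, assuming the rpmdata register from the play above; the destination path is just an example:
- name: Write one JSON report covering all hosts in the play
  copy:
    # builds {hostname: [package lines], ...} from every host's registered result
    content: "{{ dict(ansible_play_hosts | zip(ansible_play_hosts | map('extract', hostvars, ['rpmdata', 'stdout_lines']))) | to_nice_json }}"
    dest: /home/user_name/all_hosts.json
  run_once: yes
  delegate_to: localhost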

Related

Ansible module lineinfile doesn't properly write all the output

I've written a small playbook to run the sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product command and write the output to a result file, using the following .yml:
# Check if server is vmware
---
- name: Check if server is vmware
  hosts: all
  become: yes
  #ignore_errors: yes
  gather_facts: False
  serial: 50
  #become_flags: -i
  tasks:
    - name: Run uptime command
      #become: yes
      shell: "sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product"
      register: upcmd
    - debug:
        msg: "{{ upcmd.stdout }}"
    - name: write to file
      lineinfile:
        path: /home/myuser/ansible/mine/vmware.out
        create: yes
        line: "{{ inventory_hostname }};{{ upcmd.stdout }}"
      delegate_to: localhost
      #when: upcmd.stdout != ""
When running the playbook against a list of hosts I get weird, inconsistent results: even though the debug shows the correct output, when I check /home/myuser/ansible/mine/vmware.out only part of the entries are present. Even weirder, if I run the playbook a second time the whole list is correctly populated, but only after running it twice. I have repeated this several times with some minor tweaks but never get the expected result. Running with -v or -vv shows nothing unusual.
You are writing to the same file in parallel on localhost. I suspect you're hitting a write concurrency issue. Try the following and see if it fixes your problem:
- name: write to file
  lineinfile:
    path: /home/myuser/ansible/mine/vmware.out
    create: yes
    line: "{{ host }};{{ hostvars[host].upcmd.stdout }}"
  delegate_to: localhost
  run_once: true
  loop: "{{ ansible_play_hosts }}"
  loop_control:
    loop_var: host
From your description I understand that you would like to find out how to check whether a server is virtual.
That information is already collected by the setup module.
---
- hosts: linux_host
  become: false
  gather_facts: true
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
For a Linux system virtualized under MS Hyper-V, the output could contain
...
bios_version: Hyper-V UEFI Release v1.0
...
system_vendor: Microsoft Corporation
uptime_seconds: 2908494
...
userspace_architecture: x86_64
userspace_bits: '64'
virtualization_role: guest
virtualization_type: VirtualPC
and already includes the uptime in seconds, matching the output of
uptime
... up 33 days ...
For a virtual-only check, one could restrict gather_subset, resulting in a full output of just
gather_subset:
  - '!all'
  - '!min'
  - virtual
module_setup: true
virtualization_role: guest
virtualization_type: VirtualPC
A minimal play requesting only that subset is sketched below.
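Untested sketch, reusing the linux_host group from above; gather_subset is a play keyword:
---
- hosts: linux_host
  become: false
  gather_facts: true
  gather_subset:    # limit fact collection to the virtualization facts
    - '!all'
    - '!min'
    - virtual
  tasks:
    - name: Show only the virtualization facts
      debug:
        msg: "{{ ansible_facts }}"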
By Caching facts
... you have access to variables and information about all hosts even when you are only managing a small number of servers
on your Ansible Control Node. In ansible.cfg you can configure where and how they are stored, and for how long:
[defaults]
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 86400 # seconds
This would be a minimal and simple solution without re-implementing functionality which is already there.
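With the cache populated, a later play can read another host's cached facts without contacting it again. For example (sketch; the host name is hypothetical):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Read a cached fact from another host
      debug:
        msg: "{{ hostvars['host_name1']['ansible_virtualization_type'] | default('not cached yet') }}"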
Further Documentation and Q&A
Ansible facts
What is the exact list of Ansible setup min?

What is the equivalent of running ansible -m setup --tree in a playbook?

I had our security group ask for all the information we gather from the hosts we manage with Ansible Tower. I want to run the setup command and put its output into files in a folder I can run ansible-cmdb against. I need to do this in a playbook because we have disabled root login on the hosts and only allow public/private key authentication for the Tower user. The private key is stored in the database, so I cannot run the setup command from the CLI and impersonate the Tower user.
EDIT I am adding my code so it can be tested elsewhere.
gather_facts.yml
---
- hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Check for temporary dir make it if it does not exist
      file:
        path: /tmp/ansible
        state: directory
        mode: 0755
    - name: Gather Facts into a file
      copy:
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
        dest: /tmp/ansible/{{ inventory_hostname }}
cmdb_gather.yml
---
- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: Fetch fact gather files
      fetch:
        src: /tmp/ansible/{{ inventory_hostname }}
        dest: /depot/out/
        flat: yes
CLI:
ansible -i devinventory -m setup --tree out/ all
This would basically look like (written on the spot, not tested):
- name: make the equivalent of "ansible somehosts -m setup --tree /some/dir"
  hosts: my_hosts
  vars:
    treebase: /path/to/tree/base
  tasks:
    - name: Make sure we have a folder for tree base
      file:
        path: "{{ treebase }}"
        state: directory
      delegate_to: localhost
      run_once: true
    - name: Dump facts host by host in our treebase
      copy:
        dest: "{{ treebase }}/{{ inventory_hostname }}"
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
      delegate_to: localhost
Ok, I figured it out. While @Zeitounator's answer is correct, I had the wrong idea. Checking the documentation for ansible-cmdb, I noticed that the application can use the fact cache. I included a custom ansible.cfg in the project folder and added:
[defaults]
fact_caching=jsonfile
fact_caching_connection = /depot/out
ansible-cmdb was able to parse the output correctly using:
ansible-cmdb -f /depot/out > ansible_hosts.html

Ansible - Issue in writing output to a csv file

Ansible version - 2.9
I'm facing an issue writing output to a CSV file: the output is not written consistently.
I have an inventory file with three server IPs; the playbook executes a command to check the disk space of each server and writes the output to a CSV file.
Sometimes all three servers' details are written to the file, sometimes only one or two.
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    filext: ".csv"
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: username_on_the_host
    - name: get current dir
      local_action: command pwd
      register: current_dir
    - name: create dir
      file: path={{ current_dir.stdout }}/HCT state=directory
    - name: Set file path here
      set_fact:
        file_path: "{{ current_dir.stdout }}/HCT/HCT_check{{ filext }}"
    - name: Creates file
      file: path={{ file_path }} state=touch

# Writing to a csv file
- hosts:
    - masters
  become: false
  vars:
    disk_space: "Able to get disk space for the CM {{ hostname }} "
    disk_space_error: "The server {{ hostname }} is down for some reason. Please check manually."
    disk_space_run_status: "{{disk_space}}"
    cur_date: "{{ansible_date_time.iso8601}}"
  tasks:
    - name: running command to get file systems which are occupied
      command: bash -c "df -h | awk '$5>20'"
      register: disk_space_output
      changed_when: false
      ignore_errors: True
      no_log: True
    - name: Log the task get list of file systems with space occupied
      lineinfile:
        dest: "{{ hostvars['localhost']['file_path'] }}"
        line: "File system occupying disk space, {{ hostname }}, {{ ip_address }}, {{ cur_date }}"
        insertafter: EOF
        state: present
      delegate_to: localhost
Please help to resolve this issue.
The issue is that the task "Log the task get list of file systems with space occupied" is executed in parallel for the 3 servers, so you have concurrent write problems.
One solution is to use the serial keyword at play level with a value of 1; this way, all tasks are executed for each server one at a time.
- hosts:
    - masters
  become: false
  serial: 1
  vars:
    [...]
Another solution is to have the task executed on only one host, looping over the results of all hosts via hostvars:
- name: Log the task get list of file systems with space occupied
  lineinfile:
    dest: "{{ hostvars['localhost']['file_path'] }}"
    line: "File system occupying disk space, {{ hostvars[item].hostname }}, {{ hostvars[item].ip_address }}, {{ hostvars[item].cur_date }}"
    insertafter: EOF
    state: present
  run_once: True
  loop: "{{ ansible_play_hosts }}" # Looping over all hosts of the play
  delegate_to: localhost

lineinfile module of ansible with delegate_to localhost doesn't write all data to localhost, it writes only 1 random entry on localhost

I have 3 remote VMs and 1 Ansible node.
I get the hostname of each VM by running the hostname command on the remote VMs through the Ansible shell module, registering the output in a hostname_output variable.
Then I want to pair each VM's IP (collected via gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append the pair to a file temp_hostname on localhost, hence I delegate the task to localhost.
The issue: on the console, the lineinfile module reports the line as added each time it is delegated for a node, but when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as suggested in Ansible writing output from multiple task to a single file, but that gave the same result, i.e. only 1 entry:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with a redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this playbook, get_hostname.yml, using the command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using Ansible 2.1.0.0 with the default ansible.cfg, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss anything or write something wrongly?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably a file-locking issue that I don't see happening in Ansible 2.3. Also, instead of using a shell task to gather the hostname, you could use the ansible_hostname variable provided as an Ansible fact, and you can avoid gathering ALL facts by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution is to use serial: 1 on your play, which forces non-parallel execution across your hosts, but that can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to a single file, have each delegated task write to its own file (named here after the inventory_hostname value), so there are no concurrent writes at all.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
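For this use case it can be a single line, rendering the two values each host registered above (sketch):
{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}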

Ansible writing output from multiple task to a single file

In Ansible, I have written a YAML playbook that takes a list of host names and executes a command for each host. I register a variable for these tasks and, at the end of each task, append the command's output to a single file.
But every time I try to append to the output file, only the last record is persisted.
---
- hosts: list_of_hosts
  become_user: some user
  vars:
    output: []
  tasks:
    - name: some name
      command: some command
      register: output
      failed_when: "'FAILED' in output"
    - debug: msg="{{output | to_nice_json}}"
    - local_action: copy content='{{output | to_nice_json}}' dest="/path/to/my/local/file"
I even tried appending with lineinfile using the insertafter parameter, but was not successful either.
Is there anything I am missing?
You can try something like this:
- name: dummy
  hosts: myhosts
  serial: 1
  tasks:
    - name: create file
      file:
        dest: /tmp/foo
        state: touch
      delegate_to: localhost
    - name: run cmd
      shell: echo "{{ inventory_hostname }}"
      register: op
    - name: append
      lineinfile:
        dest: /tmp/foo
        line: "{{ op }}"
        insertafter: EOF
      delegate_to: localhost
I have used serial: 1 as I am not sure if lineinfile tasks running in parallel will garble the output file.
The Ansible docs recommend using copy:
- name: get jstack
  shell: "/usr/lib/jvm/java/bin/jstack -l {{PID_JAVA_APP}}"
  args:
    executable: /bin/bash
  register: jstackOut
- name: write jstack
  copy:
    content: "{{jstackOut.stdout}}"
    dest: "tmp/jstack.txt"
If you want to write a local file, add this:
delegate_to: localhost
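Putting the two together, the local write would look like this (sketch):
- name: write jstack locally on the control node
  copy:
    content: "{{jstackOut.stdout}}"
    dest: "tmp/jstack.txt"
  delegate_to: localhost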
Why complicate things?
I did it like this and it worked:
ansible-playbook your_playbook.yml >> /file/you/want/to/redirect/output.txt
You can also try some parsing with grep, or use tee -a, as shown below.
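For example, tee -a appends the run's output to a file while still showing it on the terminal:
ansible-playbook your_playbook.yml | tee -a /file/you/want/to/redirect/output.txt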
