I've written a small playbook to run the sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product command and write the output to a result file, using the following .yml:
# Check if server is vmware
---
- name: Check if server is vmware
  hosts: all
  become: yes
  #ignore_errors: yes
  gather_facts: False
  serial: 50
  #become_flags: -i
  tasks:
    - name: Run uptime command
      #become: yes
      shell: "sudo /usr/sbin/dmidecode -t1 | grep -i vmware | grep -i product"
      register: upcmd
    - debug:
        msg: "{{ upcmd.stdout }}"
    - name: write to file
      lineinfile:
        path: /home/myuser/ansible/mine/vmware.out
        create: yes
        line: "{{ inventory_hostname }};{{ upcmd.stdout }}"
      delegate_to: localhost
      #when: upcmd.stdout != ""
When running the playbook against a list of hosts I get inconsistent results: even though the debug task shows the correct output for every host, the /home/myuser/ansible/mine/vmware.out file ends up containing only some of the entries. Even weirder, if I run the playbook again it populates the whole list correctly, but only on that second run. I have repeated this several times with minor tweaks and still don't get the expected result. Running with -v or -vv shows nothing unusual.
You are writing to the same file in parallel on localhost. I suspect you're hitting a write concurrency issue. Try the following and see if it fixes your problem:
- name: write to file
  lineinfile:
    path: /home/myuser/ansible/mine/vmware.out
    create: yes
    line: "{{ host }};{{ hostvars[host].upcmd.stdout }}"
  delegate_to: localhost
  run_once: true
  loop: "{{ ansible_play_hosts }}"
  loop_control:
    loop_var: host
From your described case I understand that you would like to find out "How to check if a server is virtual?"
The information will already be collected by the setup module.
---
- hosts: linux_host
  become: false
  gather_facts: true
  tasks:
    - name: Show Gathered Facts
      debug:
        msg: "{{ ansible_facts }}"
For a Linux system virtualized under MS Hyper-V, the output could contain
...
bios_version: Hyper-V UEFI Release v1.0
...
system_vendor: Microsoft Corporation
uptime_seconds: 2908494
...
userspace_architecture: x86_64
userspace_bits: '64'
virtualization_role: guest
virtualization_type: VirtualPC
Since the uptime in seconds is already included, it also covers what the uptime command would report:
uptime
... up 33 days ...
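If you need the human-readable form, the gathered fact can be converted in a task; a minimal sketch based on the facts shown above:

    - name: Show uptime in days
      debug:
        msg: "up {{ (ansible_facts.uptime_seconds / 86400) | int }} days"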
For just the virtualization check, one could restrict fact gathering with gather_subset, resulting in a full output of only
gather_subset:
  - '!all'
  - '!min'
  - virtual
module_setup: true
virtualization_role: guest
virtualization_type: VirtualPC
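A minimal sketch (untested) of a play restricted this way, reusing the linux_host group from the example above:

---
- hosts: linux_host
  become: false
  gather_facts: true
  gather_subset:
    - '!all'
    - '!min'
    - virtual
  tasks:
    - name: Show virtualization facts
      debug:
        msg: "{{ ansible_facts.virtualization_type }} / {{ ansible_facts.virtualization_role }}"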
By Caching facts
... you have access to variables and information about all hosts even when you are only managing a small number of servers
on your Ansible Control Node. In ansible.cfg you can configure where and how they are stored and for how long.
[defaults]
fact_caching = yaml
fact_caching_connection = /tmp/ansible/facts_cache
fact_caching_timeout = 86400 # seconds
This would be a minimal and simple solution without re-implementing functionality which is already there.
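As a rough, untested sketch of how the cached facts could then feed the report file from your playbook without re-running dmidecode (assuming the cache already holds facts for every host; default('unknown') covers hosts missing from the cache):

- hosts: all
  gather_facts: false
  tasks:
    - name: Write virtualization info for all hosts to one file
      lineinfile:
        path: /home/myuser/ansible/mine/vmware.out
        create: yes
        line: "{{ host }};{{ hostvars[host].ansible_virtualization_type | default('unknown') }}"
      loop: "{{ ansible_play_hosts }}"
      loop_control:
        loop_var: host
      run_once: true
      delegate_to: localhost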
Further Documentation and Q&A
Ansible facts
What is the exact list of Ansible setup min?
Related
I am trying to query the installed RPM packages on every server and export the result to a JSON file.
So far I have managed to export one JSON file for each server. Server name is the JSON file name.
---
- hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: RPM packages query
      shell: rpm -qa --queryformat '%{NAME} %{VERSION}\n'
      register: query
    - name: list report
      debug:
        var: query.stdout_lines
    - local_action: copy content="{{ query.stdout_lines }}" dest="/home/user_name/{{ ansible_hostname }}.json"
Output, where the server name is the file name - host_name1.json:
[
"spice-glib 0.35",
"keyutils-libs-devel 1.5.8",
"mapserver 1.0",
"perl-HTTP-Cookies 6.01",
"open-vm-tools-desktop 11.0.5",
"libreport-python 2.1.11",
]
Question: how do I export the entire query.stdout_lines result from all servers into one file, including the host name? For example:
[**host_name1_here**
]
[
"spice-glib 0.35",
"keyutils-libs-devel 1.5.8",
"mapserver 1.0",
"perl-HTTP-Cookies 6.01",
"open-vm-tools-desktop 11.0.5",
"libreport-python 2.1.11",
]
[**host_name2_here**
]
[
"spice-glib 0.35",
"keyutils-libs-devel 1.5.8",
"mapserver 1.0",
"perl-HTTP-Cookies 6.01",
]
Or something like that.
Or is there another, perhaps better way to achieve this?
- hosts: all
  become: no
  vars:
    output_file: details.csv
  tasks:
    - block:
        - name: RPM packages query
          shell: rpm -qa --queryformat '%{NAME} %{VERSION}\n'
          register: rpmdata
          changed_when: false
        - name: Empty existing file if already exists
          copy:
            dest: "{{ output_file }}"
            content: 'Details'
          run_once: yes
          delegate_to: localhost
        - name: Add details to file
          lineinfile:
            path: "{{ output_file }}"
            line: "[{{ inventory_hostname }}]\n\
              [{{ rpmdata.stdout }}]\n\n"
          delegate_to: localhost
A number of changes had to be made to your original code.
Note the below:
Using the shell module means any command run leaves the task's changed status as true. Since your task does not make any changes, I set changed_when to false.
The second task cleans an existing output file, or creates the file if it is not present.
The lineinfile module can be used to append to a file.
As you may have noticed, delegate_to makes tasks 2 and 3 run on localhost, i.e. the machine Ansible is being run from, which is common to all the remote hosts in the play.
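If you would rather avoid appending from every host, another option is to render the whole report once from hostvars with a template; a rough, untested sketch (the template name rpm_report.j2 is an assumption):

- name: Write one report for all hosts
  template:
    src: rpm_report.j2
    dest: "{{ output_file }}"
  run_once: yes
  delegate_to: localhost

with rpm_report.j2 containing, for example:

{% for host in ansible_play_hosts %}
[{{ host }}]
{% for pkg in hostvars[host].rpmdata.stdout_lines %}
{{ pkg }}
{% endfor %}

{% endfor %}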
Ansible version - 2.9
I am facing an issue writing output to a CSV file: the output is not written consistently.
I have an inventory file with three server IPs; the playbook runs a command to check the disk space of each server and writes the output to a CSV file.
Sometimes it writes all three servers' details into the file, sometimes only one or two.
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    filext: ".csv"
  tasks:
    - name: get the username running the deploy
      local_action: command whoami
      register: username_on_the_host
    - name: get current dir
      local_action: command pwd
      register: current_dir
    - name: create dir
      file: path={{ current_dir.stdout }}/HCT state=directory
    - name: Set file path here
      set_fact:
        file_path: "{{ current_dir.stdout }}/HCT/HCT_check{{ filext }}"
    - name: Creates file
      file: path={{ file_path }} state=touch

# Writing to a csv file
- hosts:
    - masters
  become: false
  vars:
    disk_space: "Able to get disk space for the CM {{ hostname }} "
    disk_space_error: "The server {{ hostname }} is down for some reason. Please check manually."
    disk_space_run_status: "{{disk_space}}"
    cur_date: "{{ansible_date_time.iso8601}}"
  tasks:
    - name: running command to get file systems which are occupied
      command: bash -c "df -h | awk '$5>20'"
      register: disk_space_output
      changed_when: false
      ignore_errors: True
      no_log: True
    - name: Log the task get list of file systems with space occupied
      lineinfile:
        dest: "{{ hostvars['localhost']['file_path'] }}"
        line: "File system occupying disk space, {{ hostname }}, {{ ip_address }}, {{ cur_date }}"
        insertafter: EOF
        state: present
      delegate_to: localhost
Please help to resolve this issue.
The issue is that the task "Log the task get list of file systems with space occupied" is executed in parallel for the 3 servers, so you're having concurrent writing problems.
One solution is to use the serial keyword at play level with a value of 1, this way, all the tasks will be executed for each server one at a time.
- hosts:
    - masters
  become: false
  serial: 1
  vars:
    [...]
Another solution is to have the task executed for only 1 server but looping over the results of all servers by using hostvars:
- name: Log the task get list of file systems with space occupied
  lineinfile:
    dest: "{{ hostvars['localhost']['file_path'] }}"
    line: "File system occupying disk space, {{ hostvars[item].hostname }}, {{ hostvars[item].ip_address }}, {{ hostvars[item].cur_date }}"
    insertafter: EOF
    state: present
  run_once: True
  loop: "{{ ansible_play_hosts }}" # Looping over all hosts of the play
  delegate_to: localhost
I'm trying to create a playbook that has to complete the following tasks:
retrieve hostnames and releases from a file (file)
connect to those hostnames one by one
retrieve the contents of another file (another_file) on each host, which will tell us the environment (dev, qa, prod)
So my first task is to retrieve the names of the hosts I need to connect to:
- name: retrieve nodes
  shell: cat file | grep ";;" | grep foo | grep -v "bar" | sort | uniq
  register: nodes

- name: adding nodes
  add_host:
    name: "{{ item }}"
    group: "servers"
  with_items: "{{ nodes.stdout_lines }}"
The next play connects to those hosts:
- hosts: "{{ nodes }}"
become: yes
become_user: user1
become_method: sudo
gather_facts: no
tasks:
- name: check remote file
slurp:
src: /xyz/directory/another_file
register: thing
- debug: msg="{{ thing['content'] | b64decode }}"
But it doesn't work: when I add verbosity I still see the task being executed as the user running the playbook (local_user):
<node> ESTABLISH SSH CONNECTION FOR USER: local_user
what am I doing wrong?
Ansible version is 2.7.1, on RHEL 6.1.
UPDATE
I've also tried remote_user: user1:
- hosts: "{{ nodes }}"
remote_user: user1
gather_facts: no
tasks:
- name: check remote file
slurp:
src: /xyz/directory/another_file
register: thing
- debug: msg="{{ thing['content'] | b64decode }}"
No luck so far, same error.
You need to set remote_user, which controls the username used when connecting to the server. Only after the connection is established is become_user applied.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#hosts-and-users
Using become_user is like running a command with sudo on the remote machine. You want to connect to your machines as a specific user, and only after the connection is established issue commands with sudo.
This question has some answers that should help you.
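For illustration, a minimal sketch (untested) of that combination, reusing the pieces from the question (hosts: servers is the group created by add_host in the first play):

- hosts: servers
  remote_user: user1      # user used for the SSH connection
  become: yes             # escalate privileges after connecting
  become_method: sudo
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing
    - debug:
        msg: "{{ thing['content'] | b64decode }}"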
Is there any command in Ansible to collect the hostnames of hosts running Debian?
The inventory file hosts contains no groups!
So I'm looking for a simple command to see the hostnames of the Debian hosts.
The setup module, which is implicitly called in any playbook via the gather_facts mechanism, contains some output that you could use.
Look for example for the ansible_distribution fact, which should contain Debian on the hosts you are looking for.
If all you want is that list once, you could invoke the module directly, using the ansible command and grep:
ansible all -m setup -a 'filter=ansible_distribution' | grep Debian
If you want to use that information dynamically in a playbook, you could use this pattern:
---
- hosts: all
  tasks:
    - name: Do something on Debian
      debug:
        msg: I'm a Debian host
      when: ansible_distribution == 'Debian'
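If you want those hostnames collected into a single list, here is a rough sketch of tasks you could add to the play above (the variable name debian_hostnames is an assumption):

    - name: Build a list of Debian hostnames
      run_once: true
      set_fact:
        debian_hostnames: "{{ ansible_play_hosts
                              | map('extract', hostvars)
                              | selectattr('ansible_distribution', 'defined')
                              | selectattr('ansible_distribution', 'equalto', 'Debian')
                              | map(attribute='ansible_hostname')
                              | list }}"
    - name: Show the list
      run_once: true
      debug:
        var: debian_hostnames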
I would add that if you want to target distribution and minimum version (often the case), you could do that with a dictionary. I often use this snippet:
---
- hosts: localhost
  vars:
    minimum_required_version:
      CentOS: 7
      Debian: 11
      Fedora: 31
      RHEL7: 7
      Ubuntu: 21.10
  tasks:
    - name: set boolean fact has_minimum_required_version
      set_fact:
        has_minimum_required_version: "{{ ansible_distribution_version is version(minimum_required_version[ansible_distribution], '>=') | bool }}"
    - name: run task when host has minimum version required
      debug:
        msg: "{{ ansible_distribution }} {{ ansible_distribution_version }} >= {{ minimum_required_version[ansible_distribution] }}"
      when: has_minimum_required_version
You could also dynamically include a playbook fragment that targets specific distributions or versions like so:
- name: Include task list in play for specific distros
  include_tasks: "{{ ansible_distribution }}.yaml"
Which assumes you'd have Debian.yaml, Ubuntu.yaml, etc. defined for each distro you want to support.
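For example, a hypothetical Debian.yaml fragment (the task content is just an illustration) might look like:

# Debian.yaml
- name: Install a package the Debian way
  apt:
    name: htop
    state: present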
I have 3 remote VMs and 1 ansible node.
I am getting the hostname of some VMs by running the hostname command on those remote VMs through the Ansible shell module and registering the output in the hostname_output variable.
Then I want to pair each VM's IP (collected using gather_facts: True as {{ ansible_default_ipv4.address }}) with its hostname and append it to a file temp_hostname on localhost, hence I am delegating the task to localhost.
But the issue is: on the console, the lineinfile module reports that the line has been added for each node as the task is delegated to localhost, yet when I check the file on localhost, only 1 entry is present instead of 3.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I even tried the copy module, as suggested in Ansible writing output from multiple task to a single file, but that also gave the same result, i.e. only 1 entry.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      copy:
        content: "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}"
        dest: /volume200gb/sushil/test/code_hostname/temp_hostname
      delegate_to: 127.0.0.1
Finally, when I used the shell module with the redirection operator, it worked as I wanted, i.e. 3 entries in the file on localhost.
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: writing hostname_output in ansible node in file on ansible node
      shell: echo -e "{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}" >> temp_hostname
      delegate_to: 127.0.0.1
I am calling this ansible-playbook get_hostname.yml using command:
ansible-playbook -i hosts get_hostname.yml --ssh-extra-args="-o StrictHostKeyChecking=no" --extra-vars "remote_user=cloud-user" -vvv
My hosts file is:
10.194.11.86 private_key_file=/root/.ssh/id_rsa
10.194.11.87 private_key_file=/root/.ssh/id_rsa
10.194.11.88 private_key_file=/root/.ssh/id_rsa
I am using ansible 2.1.0.0
I am using the default ansible.cfg only, no modifications.
My question is: why didn't the lineinfile and copy modules work? Did I miss something or do something wrong?
I tried to reproduce your issue and it did not happen for me; I suspect this is a problem with your version of Ansible, so try with the latest.
That being said, I think you might be able to make it work using serial: 1; it is probably a file-locking issue that I don't see happening in Ansible 2.3. I also think that instead of using a shell task to gather the hostname you could use the ansible_hostname variable, which is provided as an Ansible fact, and you can avoid gathering ALL facts, if the hostname is all you need, by adding a task for that specifically. In the end, it would look like this:
---
- name: get hostnames of dynamically created VMs
  hosts: all
  serial: 1
  remote_user: "{{ remote_user }}"
  tasks:
    - name: Get hostnames
      setup:
        filter: ansible_hostname
    - name: writing hostname_output in ansible node in file on ansible node
      lineinfile:
        line: "{{ ansible_default_ipv4.address }} {{ ansible_hostname }}"
        dest: temp_hostname
        state: present
      delegate_to: 127.0.0.1
I get inconsistent results using your first code block with lineinfile. Sometimes I get all 3 IPs and hostnames in the destination file and sometimes I only get 2. I'm not sure why this is happening but my guess is that Ansible is trying to save changes to the file at the same time and only one change gets picked up.
The second code block won't work since copy will overwrite the file unless content matches what is already there. The last host that runs will be the only IP/hostname in the destination file.
To work around this, you can loop over your play_hosts (the active hosts in the current play) and reference their variables using hostvars.
- name: writing hostname_output in ansible node in file on ansible node
  lineinfile:
    line: "{{ hostvars[item]['ansible_default_ipv4'].address }} {{ hostvars[item]['hostname_output'].stdout }}"
    dest: temp_hostname
    state: present
  delegate_to: 127.0.0.1
  run_once: True
  with_items: "{{ play_hosts }}"
Or you can use a template with the same logic
- name: writing hostname_output in ansible node in file on ansible node
  template:
    src: IP_hostname.j2
    dest: temp_hostname
  delegate_to: 127.0.0.1
  run_once: True
IP_hostname.j2
{% for host in play_hosts %}
{{ hostvars[host]['ansible_default_ipv4'].address }} {{ hostvars[host]['hostname_output'].stdout }}
{% endfor %}
The problem here is that there are multiple concurrent writes to a single file, which leads to unexpected results.
A solution is to use serial: 1 on your play, which forces non-parallel execution among your hosts.
But it can be a performance killer depending on the number of hosts.
I would suggest another solution: instead of writing to only one file, each delegated task could write to its own file (here named after the inventory_hostname value), so there are no more concurrent writes.
After that, you can use the assemble module to merge all the files into one. Here is an example (untested):
---
- name: get hostnames of dynamically created VMs
  hosts: all
  remote_user: "{{ remote_user }}"
  gather_facts: True
  tasks:
    - name: save hostname in variable, as this command is executed remotely, and we want the value on the ansible node
      shell: hostname
      register: hostname_output
    - name: deleting tmp folder
      file: path=/tmp/temp_hostname state=absent
      delegate_to: 127.0.0.1
      run_once: true
    - name: create tmp folder
      file: path=/tmp/temp_hostname state=directory
      delegate_to: 127.0.0.1
      run_once: true
    - name: writing hostname_output in ansible node in file on ansible node
      template: src=tpl.j2 dest=/tmp/temp_hostname/{{ inventory_hostname }}
      delegate_to: 127.0.0.1
    - name: assemble hostnames
      assemble: src=/tmp/temp_hostname/ dest=temp_hostname
      delegate_to: 127.0.0.1
      run_once: true
Obviously you have to create the tpl.j2 file.
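A minimal tpl.j2 matching the line format used in the earlier attempts could be (a sketch):

{{ ansible_default_ipv4.address }} {{ hostname_output.stdout }}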