Ansible Module vmware_host_facts only returning one Host

I'm playing around with the Ansible VMware modules and tried to get the information for all ESXi hosts from a vCenter.
With the vmware_host_facts module this should be possible.
But when I run a playbook with the following configuration, I only get the information of one host back, not all of them. There are about 20 hosts in this vCenter.
Playbook:
- name: Gather vmware host facts
  vmware_host_facts:
    hostname: vCenter_IP
    username: username
    password: password
  register: host_facts
  delegate_to: localhost
The documentation says that the hostname can also be a vCenter IP.
Resource:
http://docs.ansible.com/ansible/latest/modules/vmware_host_facts_module.html#vmware-host-facts
Is that module not the correct one to gather all host information from a vCenter, or is there a "hidden trick" that I am missing?
Thanks a lot!
Kind regards,
M

I got an answer on another resource:
https://github.com/ansible/ansible/issues/43187
Basically, you have to add the hostnames as a list to the task.
Example:
- name: Gather vmware host facts
  vmware_host_facts:
    hostname: "{{ item.esxi_hostname }}"
    username: "{{ item.esxi_user }}"
    password: "{{ item.esxi_pass }}"
    validate_certs: no
  register: host_facts
  delegate_to: localhost
  with_items:
    - { esxi_hostname: hostname_1, esxi_user: username_host_1, esxi_pass: pass_host_1 }
    - { esxi_hostname: hostname_2, esxi_user: username_host_2, esxi_pass: pass_host_2 }
You can gather these hostnames with another module, vmware_vm_facts. I will update this with an example playbook in the near future; a rough sketch follows below.
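In the meantime, here is a minimal sketch of that idea. Treat it as a starting point only: the exact structure of the returned facts (the virtual_machines key and what each entry contains) differs between Ansible versions, so inspect the output with debug first.

- name: Gather VM facts from vCenter
  vmware_vm_facts:
    hostname: vCenter_IP
    username: username
    password: password
    validate_certs: no
  register: vm_facts
  delegate_to: localhost

# Inspect the result; each VM entry reports the ESXi host it runs on,
# which you can then feed into the vmware_host_facts loop above.
- debug:
    var: vm_facts.virtual_machines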

I use this module with these options, and so far I have one issue: if I put in the vCenter address, it only gives me the output of the first ESXi host.
- name: Something.
  vmware_host_facts:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_user }}'
    password: '{{ vcenter_pass }}'
    validate_certs: no
  register: all_cluster_hosts_facts
  delegate_to: localhost

- debug: var=all_cluster_hosts_facts
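Side note, as an assumption to verify against your Ansible version: newer releases of this module (community.vmware.vmware_host_facts) document an esxi_hostname parameter, which lets you keep hostname pointed at the vCenter and select the ESXi host per loop iteration instead of authenticating against each host directly. A sketch, with placeholder host names:

- name: Gather facts for each ESXi host through vCenter
  vmware_host_facts:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_user }}'
    password: '{{ vcenter_pass }}'
    esxi_hostname: '{{ item }}'   # assumption: supported in newer module versions
    validate_certs: no
  register: all_cluster_hosts_facts
  delegate_to: localhost
  with_items:
    - esxi01.example.local   # placeholder names
    - esxi02.example.local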

Related

Ansible - vmware: how to find a guest on a multiple hosts vcenter?

One of the parameters shared by many Ansible community.vmware modules is 'hostname', which is the name of the ESXi server.
In my case, a guest could be on any of multiple ESXi servers (8, for now), and a new server could be added by the support team at any time.
Is there a way to find out which ESXi server a guest is on, or is it mandatory that I know this from the start?
I could keep a list of the ESXi servers, update it on demand, and loop over it with the 'community.vmware.vmware_guest_find' module and "with_items", but I don't know how I would actually do this (iterate over the servers, changing the 'hostname', and stopping when I finally find the guest).
Any help?
I came up with the solution below. It requires having the list of hosts beforehand.
[...]
vars:
  vcenter_hostnames:
    - vcenter01
    - vcenter02
    - ...
[...]
- block:
    - name: Navigate through all vcenters looking for the guest
      community.vmware.vmware_guest_find:
        hostname: "{{ item }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        name: "{{ guest_name }}"
        validate_certs: no
      delegate_to: localhost
      register: guest_find_result
      with_items: "{{ vcenter_hostnames }}"
  rescue:
    - name: Do nothing, just keep the failed lookups from failing the play
      meta: noop
  always:
    - name: Record which vcenter and folder the guest is in
      ansible.builtin.set_fact:
        guest_folder: "{{ item['folders'][0] }}"
        vcenter_hostname: "{{ item['item'] }}"
      with_items: "{{ guest_find_result['results'] }}"
      when: item['failed'] == false
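Once those facts are recorded, they can be fed straight into a follow-up task. A small sketch under the same variable names (vmware_guest_info also needs a datacenter parameter, which is not defined in the play above and is assumed here):

- name: Query the guest on the vcenter where it was found
  community.vmware.vmware_guest_info:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter }}"   # assumed variable, not defined above
    folder: "{{ guest_folder }}"
    name: "{{ guest_name }}"
    validate_certs: no
  delegate_to: localhost
  register: guest_info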

Using Ansible to manage ESXi Inventory

I am trying to use Ansible to modify the DNS settings on a group of ESXi servers. I've been able to get my playbook to change the settings on a single server like this:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: 'myesxiserver.domain.local'
        username: 'username'
        password: 'password'
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
How can I get this to work for multiple servers? The Ansible documentation provides this example:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: '{{ esxi_hostname }}'
        username: '{{ esxi_username }}'
        password: '{{ esxi_password }}'
        change_hostname_to: esx01
        domainname: foo.org
        dns_servers:
          - 8.8.8.8
          - 8.8.4.4
      delegate_to: localhost
I'm not clear on how to iterate through a list of hosts and pass the correct value into '{{ esxi_hostname }}' for each of my servers. I'm assuming the variables can be passed using an inventory file, but I haven't found any good examples of how to do this for ESXi servers.
So I did get this working.
---
- hosts: localhost
  vars_files:
    - vars.yml
    - vars2.yml
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: "{{ item }}"
        username: 'myadmin'
        password: "{{ Password }}"
        validate_certs: no
        change_hostname_to: "{{ item }}"
        domainname: foo.org
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
      loop: "{{ esxihost }}"
I had to pass a list of host names using vars_files and iterate through it using the loop keyword. I tried to use the {{ inventory_hostname }} variable along with a standard inventory file, but because SSH is generally not enabled by default on ESXi servers, I would get an SSH connection error.
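For completeness, a sketch of what the referenced vars file could look like; the playbook above only requires that it define the esxihost list (the host names here are placeholders):

### vars.yml
esxihost:
  - esxi01.domain.local
  - esxi02.domain.local
  - esxi03.domain.local

### vars2.yml
Password: "{{ vault_esxi_password }}"   # assumption: e.g. a value pulled from Ansible Vault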

Configure VyOS vm with ansible vmware_shell

I am trying to configure a VyOS VM that I've built from a template. The template is a fresh install without any configuration.
The VM doesn't have an IP configured, so I can't use the SSH options or the vyos Ansible module. So I'm trying to use the vmware_vm_shell module, which lets me execute commands, but I can't enter conf mode for VyOS.
I've tried bash and vbash for my shell. I've tried setting the conf commands as environment vars to execute, and I've tried with_items, but it doesn't seem that will work with vmware_vm_shell.
The bare minimum I need is to configure an IP address so that I can then SSH in or use the vyos Ansible module to complete the configuration:
conf
set interfaces ethernet eth0 address 192.168.1.251/24
set service ssh port 22
commit
save
---
- hosts: localhost
  gather_facts: no
  connection: local
  vars:
    vcenter_hostname: "192.168.1.100"
    vcenter_username: "administrator@vsphere.local"
    vcenter_password: "SekretPassword!"
    datacenter: "Datacenter"
    cluster: "Cluster"
    vm_name: "router-01"
  tasks:
    - name: Run command inside a virtual machine
      vmware_vm_shell:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter }}"
        validate_certs: False
        vm_id: "{{ vm_name }}"
        vm_username: 'vyos'
        vm_password: 'abc123!!!'
        vm_shell: /bin/vbash
        vm_shell_args: 'conf 2> myFile'
        vm_shell_cwd: "/tmp"
      delegate_to: localhost
      register: shell_command_output
This throws the error:
/bin/vbash: conf: No such file or directory
I have an EdgeRouter, which uses a Vyatta-derived operating system. The issue you're having is caused by the fact that conf (or configure for me) isn't actually the name of a command. The CLI features are implemented through a complex collection of bash functions that aren't loaded when you log in non-interactively.
There is a wiki page on vyos.net that suggests a solution. By sourcing /opt/vyatta/etc/functions/script-template, you prepare the shell environment so that VyOS commands work as expected.
That is, you need to execute a shell script (with vbash) that looks like this:
source /opt/vyatta/etc/functions/script-template
conf
set interfaces ethernet eth0 address 192.168.1.251/24
set service ssh port 22
commit
save
exit
I'm not familiar with the vmware_vm_shell module, so I don't know exactly how you would do that, but for example this works for me to run a single command:
ssh ubnt#router 'vbash -c "source /opt/vyatta/etc/functions/script-template
configure
show interfaces
"'
Note the newlines in the above. That suggests that this might work:
- name: Run command inside a virtual machine
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter }}"
    validate_certs: False
    vm_id: "{{ vm_name }}"
    vm_username: 'vyos'
    vm_password: 'abc123!!!'
    vm_shell: /bin/vbash
    vm_shell_cwd: "/tmp"
    vm_shell_args: |-
      -c "source /opt/vyatta/etc/functions/script-template
      configure
      set interfaces ethernet eth0 address 192.168.1.251/24
      set service ssh port 22
      commit
      save"
  delegate_to: localhost
  register: shell_command_output
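Two things worth checking if you try this: vmware_vm_shell runs commands through the VMware Tools guest operations API, so the template needs VMware Tools (or open-vm-tools) running for any of this to work, and the registered variable can be inspected afterwards to confirm what happened:

- debug:
    var: shell_command_output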

Ansible | Deploy VM then run additional playbooks against new host?

I am fairly new to Ansible. I have created an Ansible role that contains the following tasks that will deploy a VM from a template, then configure the VM with custom OS settings:
create-vm.yml
configure-vm.yml
I can successfully deploy a VM from a template via the "create-vm" task. But after that is complete, I would like to continue with the "configure-vm" task. Since the playbooks/role-vm-deploy.yml file contains "localhost" as shown here...
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local
... the next task doesn't run successfully, because it attempts to run against "localhost" and not the new VM's hostname. I have since added the following to the end of the "create-vm" task...
- name: Add host to group 'just_created'
  add_host:
    name: '{{ hostname }}.{{ domain }}'
    groups: just_created
...but I'm not quite sure what to do with it. I can't wrap my head around what else I need to do and how to target the new hostname in the "configure-vm" task instead of localhost.
I am executing the playbook via CLI
# ansible-playbook playbooks/role-vm-deploy.yml
I saw this post, which was kind of helpful
I also saw the dynamic inventory documentation, but it's a bit over my head at this juncture. Any help would be appreciated. Thank you!
Here are the contents for the playbooks and tasks
### playbooks -> role-vm-deploy.yml
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local

### roles -> vm-deploy -> tasks -> main.yml
- name: Deploy VM
  include: create-vm.yml
  tags:
    - create-vm

- name: Configure VM
  include: configure-vm.yml
  tags:
    - configure-vm

### roles -> vm-deploy -> tasks -> create-vm.yml
- name: Clone the template
  vmware_guest:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_pwd }}'
    validate_certs: False
    name: '{{ hostname }}'
    template: '{{ template_name }}'
    datacenter: '{{ datacenter }}'
    folder: '/'
    hardware:
      memory_mb: '{{ memory }}'
      num_cpus: '{{ num_cpu }}'
    networks:
      - label: "Network adapter 1"
        state: present
        connected: True
        name: '{{ vlan }}'
    state: poweredon
    wait_for_ip_address: yes

### roles -> vm-deploy -> tasks -> configure-vm.yml
### This task is what I need to execute on the new hostname, but it attempts to execute on "localhost" ###
# Configure Networking
- name: Configure IP Address
  lineinfile:
    path: '{{ network_conf_file }}'
    regexp: '^IPADDR='
    line: 'IPADDR={{ ip_address }}'

- name: Configure Gateway Address
  lineinfile:
    path: '{{ network_conf_file }}'
    regexp: '^GATEWAY='
    line: 'GATEWAY={{ gw_address }}'

### roles -> vm-deploy -> defaults -> main.yml
# All of the variables reside here, including "{{ hostname }}.{{ domain }}"
You got so close! The trick is to observe that a playbook is actually a list of plays (the yaml objects that are {"hosts": "...", "tasks": []}), and the targets of subsequent plays don't have to exist when the playbook starts -- presumably for this very reason. Thus:
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local
  # or wherever you were executing this -- it wasn't obvious from your question
  post_tasks:
    - name: Add host to group 'just_created'
      add_host:
        name: '{{ hostname }}.{{ domain }}'
        groups: just_created

- hosts: just_created
  tasks:
    - debug:
        msg: hello from the newly created {{ inventory_hostname }}
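One practical wrinkle to be aware of: the new VM may not accept SSH connections the moment the first play ends, so it can be worth pausing before configuring it. A sketch of a more defensive second play, using the stock wait_for_connection module:

- hosts: just_created
  gather_facts: no
  tasks:
    - name: Wait until the new VM is reachable
      wait_for_connection:
        timeout: 600
    - name: Gather facts now that the host is up
      setup: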

Ansible-Playbook - Save output to a remote server

I'm new to all the Ansible stuff, so most of the time I'm in "trial and error" mode.
Now I'm facing a challenge with a playbook and do not know where to look further.
The main task of this playbook is to get a "show run" from a Cisco device and save it in a text file on a backup server (which is a remote server).
The only task which is not working is the backup task.
Here is my playbook:
- hosts: IOSGATEWAY
  gather_facts: no
  connection: local
  tasks:
    - name: GET CREDENTIALS
      include_vars: path/to/all/all.yml
    - name: DEFINE CONNECTION TO GW
      set_fact:
        connection:
          host: "{{ inventory_hostname }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"
    - name: GET SHOW RUN
      ios_command:
        provider: "{{ connection }}"
        commands:
          - show run
      register: show_run
    - name: SAVE TO BACKUP SERVER
      copy:
        content: "{{ show_run.stdout[0] }}"
        dest: "path/to/Directory/{{ inventory_hostname }}.txt"
      delegate_to: BACKUPSERVER
Can someone point me in the right direction?
You set connection: local for the playbook, so everything is executed locally (which is correct for the ios_... modules, but not what you actually want for the copy module).
I'd recommend defining the ansible_connection variable in your inventory per group of hosts/devices, so Ansible uses a local connection for your IOS devices and SSH for the backup server.
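A sketch of such an inventory, with illustrative group names; with this in place you can drop connection: local from the play, target both groups, and let each host pick its own connection type:

[ios]
IOSGATEWAY ansible_connection=local

[backup]
BACKUPSERVER ansible_connection=ssh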
