I have two playbooks: the first iterates over a list of ESXi servers, gets the list of all VMs on each, and then passes that list to the second playbook, which should iterate over the IPs of those VMs. Instead, it keeps executing against the last ESXi server. I need to switch the target host to the VM IP that I'm currently passing to the second playbook, but I don't know how. Anybody?
First playbook:
- name: get VM list from ESXi
  hosts: all
  tasks:
    - name: get facts
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts

    - name: Debugging data
      debug:
        msg: "IP of {{ item.key }} is {{ item.value.ip_address }} and is {{ item.value.power_state }}"
      with_dict: "{{ esx_facts.virtual_machines }}"

    - name: Passing data to include file
      include: includeFile.yml ip_address="{{ item.value.ip_address }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
My second playbook:
- name: <<Check the IP received
  debug:
    msg: "Received IP: {{ ip_address }}"

- name: <<Get custom facts
  vmware_vm_facts:
    hostname: "{{ ip_address }}"
    username: root
    password: passw
    validate_certs: False
  delegate_to: localhost
  register: custom_facts
I do receive the correct VM ip_address, but vmware_vm_facts still tries to run against the ESXi server instead...
If you can't afford to set up dynamic inventory for the VMs, your playbook should have two plays: one to collect the VMs' IPs, and another to run tasks on those VMs, like this (pseudocode):
---
- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - vmware_vm_facts:
        ... params_here ...
      register: vmfacts

    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"

- hosts: myvms
  tasks:
    - apt:
        name: "*"
        state: latest
Within the first play we collect facts from each hypervisor and populate the myvms group of the in-memory inventory. Within the second play we run the apt module for every host in the myvms group.
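Since the original debug output shows power_state, note that powered-off VMs won't have a usable address. A minimal sketch of a guarded add_host (the group name and fact layout follow the pseudocode above; treat this as an assumption about your fact data):

```yaml
# Sketch: only register VMs that are powered on and actually report an IP;
# powered-off VMs have no ip_address to connect to.
- add_host:
    name: "{{ item.key }}"
    ansible_host: "{{ item.value.ip_address }}"
    group: myvms
  when:
    - item.value.power_state == 'poweredOn'
    - item.value.ip_address is defined
  with_dict: "{{ vmfacts.virtual_machines }}"
```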
I want to write the hostname and ssh_host of every host under a specific group in an inventory file to a file on my localhost.
I have tried this:
my_role
- set_fact:
    hosts: "{{ hosts_list | default({}) | combine({inventory_hostname: ansible_ssh_host}) }}"
  when: "'my_group' in group_names"
Playbook
- become: no
  gather_facts: no
  roles:
    - my_role
But now if I try to write this to a file in another task, it creates a file for every host in the inventory file.
Can you please suggest a better way to create a file containing a dictionary with inventory_hostname and ansible_ssh_host as key and value respectively, by running the playbook on localhost?
An option would be to use lineinfile and delegate_to localhost.
- hosts: test_01
  tasks:
    - set_fact:
        my_inventory_hostname: "{{ ansible_host }}"

    - tempfile:
      delegate_to: localhost
      register: result

    - lineinfile:
        path: "{{ result.path }}"
        line: "inventory_hostname: {{ my_inventory_hostname }}"
      delegate_to: localhost

    - debug:
        msg: "inventory_hostname: {{ ansible_host }} stored in {{ result.path }}"
Note: Conditions are expanded by default, so there is no need to wrap them in {{ }}. The group name is a string literal, though, so it keeps its own quotes:
when: "'my_group' in group_names"
Thanks a lot for your answer, @Vladimir Botka.
I came up with another requirement: writing the hosts belonging to a certain group to a file. The code below can help with that (adding to @Vladimir's solution).
- hosts: all
  become: no
  gather_facts: no
  tasks:
    - set_fact:
        my_inventory_hostname: "{{ ansible_host }}"

    - lineinfile:
        path: temp/hosts
        line: "inventory_hostname: {{ my_inventory_hostname }}"
      when: inventory_hostname in groups['my_group']
      delegate_to: localhost

    - debug:
        msg: "inventory_hostname: {{ ansible_host }}"
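If one file with the whole mapping is the goal, an alternative sketch is to render the dictionary in a single delegated task instead of one lineinfile call per host. This assumes Ansible 2.3+ for the zip filter; the temp/hosts path is just an example:

```yaml
# Sketch: build the {inventory_hostname: ansible_host} dict for my_group
# once and write it in one shot; run_once avoids one write per host.
- copy:
    dest: temp/hosts
    content: >-
      {{ dict(groups['my_group'] |
              zip(groups['my_group'] | map('extract', hostvars, 'ansible_host')))
         | to_nice_yaml }}
  run_once: yes
  delegate_to: localhost
```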
Could you please help me add the NTP address as a variable? This is a VMware NTP server update, and I am trying to pass the NTP server address as a variable.
var file content
cat var.yml
NTP_Servers:
  - 192.168.10.20
  - 192.168.10.21
Playbook file
- name: Add NTP to host on specified portgroup
  local_action:
    module: vmware_host_ntp
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    esxi_hostname: "{{ item.value.mgmtip }}"
    ntp_servers: <<<<<<<<<IP ADDRESS>
    state: present
  with_dict: "{{ PayloadNodes | default({}) }}"
  ignore_errors: yes
  tags: NTP
If you want to get the contents of the file as a variable, you can either use include_vars on the task level:
- name: Load the NTP variables
  include_vars: var.yml
or use vars_files on the playbook level:
---
- hosts: netservers
  vars_files:
    - var.yml
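Either way, once var.yml is loaded, the list can be passed straight to the module. Sketching it on the task from the question:

```yaml
# With NTP_Servers loaded from var.yml, pass the whole list to the module
- name: Add NTP to host on specified portgroup
  local_action:
    module: vmware_host_ntp
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    esxi_hostname: "{{ item.value.mgmtip }}"
    ntp_servers: "{{ NTP_Servers }}"   # the list defined in var.yml
    state: present
  with_dict: "{{ PayloadNodes | default({}) }}"
```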
I wrote an Ansible playbook that creates multiple VMs. The playbook is split into two files, Main.yaml and vars.yaml. It creates the VMs and seems to work nicely; I'm not getting any errors, so I assume it successfully added the created hosts to the inventory. I want to check whether the created hosts were actually added. How can I print/list the hosts of the inventory? My goal is to run scripts on the created VMs at a later stage. Thanks.
**Main.yaml**
#########CREATING VM#########
---
- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  vars_files:
    - vars.yaml
  tasks:
    - name: create VM
      os_server:
        name: "{{ item.name }}"
        state: present
        image: "{{ item.image }}"
        boot_from_volume: True
        security_groups: ssh
        flavor: "{{ item.flavor }}"
        key_name: mykey
        region_name: "{{ lookup('env', 'OS_REGION_NAME') }}"
        nics:
          - net-name: private
        wait: yes
      register: instances
      with_items: "{{ instance_definitions }}"
    ############################################
    - name: wait 15 seconds
      pause: seconds=15
      when: instances.changed
    ######DEBUG#################################
    - name: display results
      debug:
        msg: "{{ item }}"
      with_items: "{{ instances.results }}"
    ############################################
    - name: Add new VM to ansible Inventory
      add_host:
        name: "{{ item.server.name }}"
        ansible_host: "{{ item.server.public_v4 }}"
        ansible_user: "{{ ansible_user }}"
        ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
        groups: just_created
      with_items: "{{ instances.results }}"
**vars.yaml**
---
instance_definitions:
  - { name: Debian Jessie, image: Debian Jessie 8, flavor: c1.small, loginame: debian }
  - { name: Debian Stretch, image: Debian Stretch 9, flavor: c1.small, loginame: debian }
This is what magic variables are for.
All your hosts will be in a list:
groups['just_created']
The example below illustrates how to create in-memory hosts and list them. The magic sauce is that add_host needs to run in a separate play from the one that uses the new hosts.
---
- name: adding host playbook
  hosts: localhost
  connection: local
  tasks:
    - name: add host to in-memory inventory
      add_host:
        name: awesome_host_name
        groups: in_memory

- name: checking hosts
  hosts: in_memory
  connection: local
  gather_facts: false
  tasks:
    - debug: var=group_names
    - debug: msg="{{ inventory_hostname }}"
    - debug: var=hostvars[inventory_hostname]
You can print the content of the inventory file from the playbook using:
- debug: msg="the hosts are {{ lookup('file', '/etc/ansible/hosts') }}"
Alternatively, you can list the hosts from command line using:
ansible --list-hosts all
or use this command from the playbook:
tasks:
  - name: list hosts
    command: ansible --list-hosts all
    register: hosts

  - debug:
      msg: "{{ hosts.stdout_lines }}"
I'm trying to load a list of hosts in my playbook to provision a KVM host. I need to key this off of hosts.yml, because another playbook is going to take the hosts and connect to them once they're up.
This my hosts.yml:
kvm:
  hosts:
    kvm01
dcos:
  dcos-bootstrap:
    hosts:
      dcos-bootstrap
    vars:
      lv_size: "10g"
  dcos-masters:
    hosts:
      dcos-master-1
    vars:
      lv_size: "50g"
  dcos-agents:
    hosts:
      dcos-agent-1
      dcos-agent-2
    vars:
      lv_size: "50g"
So on a single KVM I run this playbook to create the logical volumes for each of the Virtual Machines corresponding to the dcos hosts:
---
- hosts: kvm
  tasks:
    - name: Get a list of vm's to create
      include_vars:
        file: "../hosts.yml"

    - name: Verify the host list
      debug: var=dcos
      when: dcos is defined

    - name: Provision Volume Groups
      lvol:
        vg: vg01
        lv: "{{ item.value.hosts }}"
        size: "{{ item.value.vars.lv_size }}"
      with_dict: "{{ dcos }}"
This works fine until you include more than one host in a group. I've tried other loops, but I'm not sure how to continue. How can I iterate over the hash while working on each host in a group?
I'm new to Ansible, so I really wasn't sure how to accomplish what I wanted. I initially tried to parse the hosts file myself, not knowing that Ansible does this for me. Now I know...
All of the host and group data is available in hostvars and groups. All I needed to do was use them like so:
vars:
  dcoshosts: "{{ groups['dcos'] }}"
tasks:
  - name: List groups
    debug:
      msg: "{{ groups }}"

  - name: Get All DCOS hosts
    debug:
      msg: "Host: {{ item }}"
    with_items:
      - "{{ dcoshosts }}"

  - name: Provision Volume Groups
    lvol:
      vg: vg01
      lv: "{{ item }}"
      size: "{{ hostvars[item].lv_size }}"
    with_items: "{{ dcoshosts }}"
I ended up using a hosts.ini file instead of YAML, because the INI format worked and my YAML didn't. Here it is, to complete the picture:
[dcos-masters]
dcos-master-1
dcos-master-2
[dcos-masters:vars]
lv_size="50g"
[dcos-agents]
dcos-agent-1
dcos-agent-2
[dcos-agents:vars]
lv_size="50g"
[dcos-bootstraps]
dcos-bootstrap
[dcos-bootstraps:vars]
lv_size="10g"
[dcos:children]
dcos-masters
dcos-agents
dcos-bootstraps
Thanks to everybody for the help and pushing me to my solution :)
You are trying to reinvent what Ansible already provides.
This is the Ansible way of life:
Define your hosts and groups in an inventory file dcos.ini:
[dcos-bootstraps]
dcos-bootstrap
[dcos-masters]
dcos-master-1
[dcos-agents]
dcos-agent-1
dcos-agent-2
[dcos:children]
dcos-bootstraps
dcos-masters
dcos-agents
Then you customize the group parameters in group variables.
Bootstrap parameters in group_vars/dcos-bootstraps.yml:
---
lv_size: 10g
Masters parameters in group_vars/dcos-masters.yml:
---
lv_size: 50g
Agents parameters in group_vars/dcos-agents.yml:
---
lv_size: 50g
And your playbook becomes quite simple:
---
- hosts: dcos
  tasks:
    - name: Provision Volume Groups
      lvol:
        vg: vg01
        lv: "{{ inventory_hostname }}"
        size: "{{ lv_size }}"
When you run this, each host gets its lv_size parameter based on its group membership.
First of all, I suspect you have incorrect data.
If you want your hosts to be a list, you should make it so:
kvm:
  hosts:
    kvm01
dcos:
  dcos-bootstrap:
    hosts:
      - dcos-bootstrap
    vars:
      lv_size: "10g"
  dcos-masters:
    hosts:
      - dcos-master-1
    vars:
      lv_size: "50g"
  dcos-agents:
    hosts:
      - dcos-agent-1
      - dcos-agent-2
    vars:
      lv_size: "50g"
Note the hyphens.
And to your question: if you are not going to use the group names in your loop (dcos-bootstrap, etc.), you can use with_subelements:
- name: Provision Volume Groups
  lvol:
    vg: vg01
    lv: "{{ item.1 }}"
    size: "{{ item.0.vars.lv_size }}"
  with_subelements:
    - "{{ dcos }}"
    - hosts
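For clarity, a debug task shows what the subelements loop yields for the data above: item.0 is one group's dict (including its vars), and item.1 is one host from that group's hosts list. This is only a verification sketch:

```yaml
# Sketch: inspect the (group, host) pairs produced by with_subelements
- debug:
    msg: "host {{ item.1 }} gets lv_size {{ item.0.vars.lv_size }}"
  with_subelements:
    - "{{ dcos }}"
    - hosts
```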
My Ansible (2.1.1.0) playbook below throws a "skipping: no hosts matched" error when it reaches the play for host [ec2-test]. The Ansible hosts file has the FQDN of the newly created instance added. It runs fine if I re-run the playbook a second time, but the first run throws the "no hosts matched" error :(
my playbook:
---
- name: Provision an EC2 instance
  hosts: localhost
  connection: local
  gather_facts: no
  become: False
  vars_files:
    - awsdetails.yml
  tasks:
    - name: Launch the new EC2 Instance
      ec2:
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        group_id: "{{ security_group_id }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        key_name: "{{ ssh_keyname }}"
        wait: yes
        region: "{{ region }}"
        count: 1
      register: ec2

    - name: Update the ansible hosts file with new IP address
      local_action: lineinfile dest="/etc/ansible/hosts" regexp={{ item.dns_name }} insertafter='\[ec2-test\]' line="{{ item.dns_name }} ansible_ssh_private_key_file=/etc/ansible/e2-key-file.pem ansible_user=ec2-user"
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: "{{ ec2.instances }}"

- name: playing ec2-test instances
  hosts: ec2-test
  gather_facts: no
my hosts file has these inventories
[localhost]
localhost
....
[ec2-test]
ec2-54-244-180-186.us-west-2.compute.amazonaws.com
Any idea why I am getting the "skipping: no hosts matched" error here? Any help would be greatly appreciated.
Thanks!
It seems that Ansible reads the inventory file just once and does not refresh it after you add an entry during playbook execution. Hence, it cannot see the newly added host.
To fix this you can force Ansible to re-read the entire inventory with the task below (note that meta: refresh_inventory requires Ansible 2.4 or later), executed after you update the inventory file:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory
or you may not update the inventory file at all and use the add_host task instead:
- add_host:
    name: "{{ item.dns_name }}"
    groups: ec2-test
  with_items: "{{ ec2.instances }}"
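With the add_host variant, the second play from the question should then match on the very first run, since in-memory hosts are visible to later plays in the same execution. A quick verification sketch (the debug task is only a check):

```yaml
- name: playing ec2-test instances
  hosts: ec2-test
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} was reached on the first run"
```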