Ansible: Handle Multiple Elements in Hash

I'm trying to load a list of hosts in my playbook to provision a KVM. I need to key this off of the hosts.yml because another playbook is going to take the hosts and connect to them once they're up.
This is my hosts.yml:
kvm:
  hosts:
    kvm01
dcos:
  dcos-bootstrap:
    hosts:
      dcos-bootstrap
    vars:
      lv_size: "10g"
  dcos-masters:
    hosts:
      dcos-master-1
    vars:
      lv_size: "50g"
  dcos-agents:
    hosts:
      dcos-agent-1
      dcos-agent-2
    vars:
      lv_size: "50g"
So on a single KVM I run this playbook to create the logical volumes for each of the Virtual Machines corresponding to the dcos hosts:
---
- hosts: kvm
  tasks:
    - name: Get a list of vm's to create
      include_vars:
        file: "../hosts.yml"
    - name: Verify the host list
      debug: var=dcos
      when: dcos is defined
    - name: Provision Volume Groups
      lvol:
        vg: vg01
        lv: "{{ item.value.hosts }}"
        size: "{{ item.value.vars.lv_size }}"
      with_dict: "{{ dcos }}"
This works fine until you include more than one host for a group. I've tried other loops but I'm not sure how to continue. How can I iterate over the hash while working on each host in a group?

I'm new to Ansible so I really wasn't sure how to accomplish what I wanted. I initially tried to parse the hosts file myself, not knowing that Ansible does this for me. Now I know...
All of the host and group data is stored in the hostvars and groups magic variables. All I needed to do was use them, like so:
vars:
  dcoshosts: "{{ groups['dcos'] }}"
tasks:
  - name: List groups
    debug:
      msg: "{{ groups }}"
  - name: Get All DCOS hosts
    debug:
      msg: "Host: {{ item }}"
    with_items:
      - "{{ dcoshosts }}"
  - name: Provision Volume Groups
    lvol:
      vg: vg01
      lv: "{{ item }}"
      size: "{{ hostvars[item].lv_size }}"
    with_items: "{{ dcoshosts }}"
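As a sanity check (a minimal sketch, not part of the original solution), you can dump what hostvars holds for each host; the lv_size values come from the group vars in the inventory:

- name: Show each host's lv_size (illustrative)
  debug:
    msg: "{{ item }} -> {{ hostvars[item].lv_size }}"
  with_items: "{{ dcoshosts }}"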
I ended up using a hosts.ini file instead of YAML because the INI file worked and the YAML didn't. Here it is to complete the picture:
[dcos-masters]
dcos-master-1
dcos-master-2
[dcos-masters:vars]
lv_size="50g"
[dcos-agents]
dcos-agent-1
dcos-agent-2
[dcos-agents:vars]
lv_size="50g"
[dcos-bootstraps]
dcos-bootstrap
[dcos-bootstraps:vars]
lv_size="10g"
[dcos:children]
dcos-masters
dcos-agents
dcos-bootstraps
Thanks to everybody for the help and pushing me to my solution :)

You're trying to reinvent what Ansible already provides.
This is the Ansible way of life:
Define your hosts and groups in an inventory file dcos.ini:
[dcos-bootstraps]
dcos-bootstrap
[dcos-masters]
dcos-master-1
[dcos-agents]
dcos-agent-1
dcos-agent-2
[dcos:children]
dcos-bootstraps
dcos-masters
dcos-agents
Then you customize the group parameters in group variables.
Bootstrap parameters in group_vars/dcos-bootstraps.yml:
---
lv_size: 10g
Masters parameters in group_vars/dcos-masters.yml:
---
lv_size: 50g
Agents parameters in group_vars/dcos-agents.yml:
---
lv_size: 50g
And your playbook becomes quite simple:
---
- hosts: dcos
  tasks:
    - name: Provision Volume Groups
      lvol:
        vg: vg01
        lv: "{{ inventory_hostname }}"
        size: "{{ lv_size }}"
When you run this, each host gets its lv_size parameter based on its group membership.
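Assuming the playbook is saved as provision.yml (a hypothetical name) alongside the dcos.ini inventory and the group_vars directory, a run would look like:

ansible-playbook -i dcos.ini provision.yml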

First of all, I suspect you have incorrect data.
If you want your hosts to be a list, you should make it so:
kvm:
  hosts:
    kvm01
dcos:
  dcos-bootstrap:
    hosts:
      - dcos-bootstrap
    vars:
      lv_size: "10g"
  dcos-masters:
    hosts:
      - dcos-master-1
    vars:
      lv_size: "50g"
  dcos-agents:
    hosts:
      - dcos-agent-1
      - dcos-agent-2
    vars:
      lv_size: "50g"
Note the hyphens.
And to your question: if you are not going to use "group" names in your loop (like dcos-bootstrap, etc.), you can use with_subelements:
- name: Provision Volume Groups
  lvol:
    vg: vg01
    lv: "{{ item.1 }}"
    size: "{{ item.0.vars.lv_size }}"
  with_subelements:
    - "{{ dcos }}"
    - hosts
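On each iteration, item.0 is one group entry from dcos and item.1 is a single host from its hosts list. A debug variant of the same loop (an illustrative sketch mirroring the task above) makes that pairing visible:

- name: Inspect subelements pairs (illustrative)
  debug:
    msg: "host {{ item.1 }} gets lv_size {{ item.0.vars.lv_size }}"
  with_subelements:
    - "{{ dcos }}"
    - hosts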

Related

Ansible print created hosts

I wrote an Ansible playbook that creates multiple VMs. The playbook is split into two files, Main.yaml and vars.yaml, and it creates the VMs and seems to work nicely. I'm not getting any errors, so I assume it successfully added the created hosts to the inventory. I want to check whether the created hosts were actually added. How can I print/list the hosts of the inventory? My goal is to run scripts at a later stage on the created VMs. Thanks.
**Main.yaml**
#########CREATING VM#########
---
- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  vars_files:
    - vars.yaml
  tasks:
    - name: create VM
      os_server:
        name: "{{ item.name }}"
        state: present
        image: "{{ item.image }}"
        boot_from_volume: True
        security_groups: ssh
        flavor: "{{ item.flavor }}"
        key_name: mykey
        region_name: "{{ lookup('env', 'OS_REGION_NAME') }}"
        nics:
          - net-name: private
        wait: yes
      register: instances
      with_items: "{{ instance_definitions }}"
    ############################################
    - name: wait 15 seconds
      pause: seconds=15
      when: instances.changed
    ######DEBUG#################################
    - name: display results
      debug:
        msg: "{{ item }}"
      with_items: "{{ instances.results }}"
    ############################################
    - name: Add new VM to ansible Inventory
      add_host:
        name: "{{ item.server.name }}"
        ansible_host: "{{ item.server.public_v4 }}"
        ansible_user: "{{ ansible_user }}"
        ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
        groups: just_created
      with_items: "{{ instances.results }}"
**vars.yaml**
---
instance_definitions:
  - { name: Debian Jessie, image: Debian Jessie 8, flavor: c1.small, loginame: debian }
  - { name: Debian Stretch, image: Debian Stretch 9, flavor: c1.small, loginame: debian }
This is what magic variables are for.
All your hosts will be in a list:
groups['just_created']
The example below illustrates how to create in-memory hosts and list them. The magic sauce is that add_host needs to be in a separate play from the one that targets the new hosts.
---
- name: adding host playbook
  hosts: localhost
  connection: local
  tasks:
    - name: add host to in-memory inventory
      add_host:
        name: awesome_host_name
        groups: in_memory

- name: checking hosts
  hosts: in_memory
  connection: local
  gather_facts: false
  tasks:
    - debug: var=group_names
    - debug: msg="{{ inventory_hostname }}"
    - debug: var=hostvars[inventory_hostname]
You can print the content of the inventory file from the playbook using:
- debug: msg="the hosts are {{ lookup('file', '/etc/ansible/hosts') }}"
Alternatively, you can list the hosts from command line using:
ansible --list-hosts all
or use this command from the playbook:
tasks:
  - name: list hosts
    command: ansible --list-hosts all
    register: hosts
  - debug:
      msg: "{{ hosts.stdout_lines }}"
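If your Ansible version ships it (2.4 and later), ansible-inventory is another way to dump the parsed inventory without a playbook:

ansible-inventory --list
ansible-inventory --graph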

How to switch Ansible playbook to another host when calling second playbook

I have two playbooks: my first playbook iterates over a list of ESXi servers getting the list of all VMs, and then passes that list to the second playbook, which should iterate over the IPs of the VMs. Instead it is still trying to execute on the last ESXi server. I need to switch the host to the VM IP that I'm currently passing to the second playbook. I don't know how to switch... Anybody?
First playbook:
- name: get VM list from ESXi
  hosts: all
  tasks:
    - name: get facts
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts
    - name: Debugging data
      debug:
        msg: "IP of {{ item.key }} is {{ item.value.ip_address }} and is {{ item.value.power_state }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
    - name: Passing data to include file
      include: includeFile.yml ip_address="{{ item.value.ip_address }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
My second playbook:
- name: <<Check the IP received
  debug:
    msg: "Received IP: {{ ip_address }}"
- name: <<Get custom facts
  vmware_vm_facts:
    hostname: "{{ ip_address }}"
    username: root
    password: passw
    validate_certs: False
  delegate_to: localhost
  register: custom_facts
I do receive the correct VM's ip_address but vmware_vm_facts is still trying to run on the ESXi server instead...
If you can't afford to set up a dynamic inventory for the VMs, your playbook should have two plays: one to collect the VMs' IPs and another for the tasks on those VMs, like this (pseudocode):
---
- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - vmware_vm_facts:
        ... params_here ...
      register: vmfacts
    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"

- hosts: myvms
  tasks:
    - apt:
        name: "*"
        state: latest
Within the first play we collect facts from each hypervisor and populate the myvms group of the in-memory inventory. Within the second play we run the apt module for every host in the myvms group.
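If you want to sanity-check what add_host produced before running real tasks, a debug-only second play (an illustrative sketch) could look like this; ansible_host is the variable set by add_host above:

- hosts: myvms
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ inventory_hostname }} -> {{ ansible_host }}"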

Ansible - Run Task Against Hosts in a with_together Fashion

I am currently taking two hosts and dynamically adding them to a group, followed by a synchronize task using with_together to use 3 lists of 2 elements in parallel to copy the specified files between two remote servers.
Here's an example based on the idea:
---
- name: Configure Hosts for Copying
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Adding given hosts to new group...
      add_host:
        name: "{{ item }}"
        groups: copy_group
      with_items:
        - ["remoteDest1", "remoteDest2"]

- name: Copy Files between servers
  hosts: copy_group
  gather_facts: no
  tasks:
    - name: Copying files...
      synchronize:
        src: "{{ item[1] }}"
        dest: "{{ item[2] }}"
      with_together:
        - ["remoteSrc1", "remoteSrc2"]
        - ["/tmp/remote/source/one/", "/tmp/remote/source/two/"]
        - ["/tmp/remote/dest/one/", "/tmp/remote/dest/two/"]
      delegate_to: "{{ item[0] }}"
Currently, it does both operations for both servers, resulting in 4 operations.
I need it to synchronize like so:
copy /tmp/remote/source/one/ from remoteSrc1 to /tmp/remote/dest/one/ on remoteDest1
copy /tmp/remote/source/two/ from remoteSrc2 to /tmp/remote/dest/two/ on remoteDest2
Which would mean it's a 1:1 ratio; essentially acting on the hosts in the same manner as with_together does for the lists.
The hosts are obtained dynamically, so I can't just make a different play for each host.
Since synchronize is essentially a simplified front end for rsync, a simple solution that uses rsync directly would also be much appreciated.
There isn't native functionality for this, so this is how I solved it:
Given the original task, add the following two lines:
- "{{ groups['copy_group'] }}"
when: inventory_hostname == item[3]
To get:
- name: Copying files...
  synchronize:
    src: "{{ item[1] }}"
    dest: "{{ item[2] }}"
  with_together:
    - ["remoteSrc1", "remoteSrc2"]
    - ["/tmp/remote/source/one/", "/tmp/remote/source/two/"]
    - ["/tmp/remote/dest/one/", "/tmp/remote/dest/two/"]
    - "{{ groups['copy_group'] }}"
  delegate_to: "{{ item[0] }}"
  when: inventory_hostname == item[3]
Essentially, by adding the hosts as a list, they can be used in the when statement to execute the task only when the current host (inventory_hostname) matches the host currently being indexed in the list.
The result is that the task runs against each host exactly once, paired with the other list items that share the same index.
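To make the pairing concrete, a debug variant of the same loop (a sketch using the example values above) shows that each host keeps exactly one combination:

- name: Show the matching combination per host (illustrative)
  debug:
    msg: "{{ inventory_hostname }}: copy {{ item[1] }} from {{ item[0] }} to {{ item[2] }}"
  with_together:
    - ["remoteSrc1", "remoteSrc2"]
    - ["/tmp/remote/source/one/", "/tmp/remote/source/two/"]
    - ["/tmp/remote/dest/one/", "/tmp/remote/dest/two/"]
    - "{{ groups['copy_group'] }}"
  when: inventory_hostname == item[3]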

lineinfile ansible module skips a line

I need to know the index of each host name in the inventory. I am using the code below to create a variable file that I can use in a subsequent playbook:
- name: Debug me
  hosts: hosts
  tasks:
    - debug: msg="{{ inventory_hostname }}"
    - debug: msg="{{ play_hosts.index(inventory_hostname) }}"
    - local_action: 'lineinfile create=yes dest=/tmp/test.conf
        line="host{{ play_hosts.index(inventory_hostname) }}=
        {{ inventory_hostname }}"'
I have the following inventory file:
[hosts]
my.host1.com
my.host2.com
Now when I run this, the test.conf generated under /tmp sometimes has both hostnames, like this:
host1= my.host2.com
host0= my.host1.com
When I run the same playbook a few times, emptying test.conf before each run, quite often the file has only one entry:
host1= my.host2.com
or
host0= my.host1.com
How come the same Ansible playbook behaves differently?
I believe the issue is that you're running two threads against different hosts, and local_action is not thread-safe.
Try using the serial keyword:
- name: Debug me
  hosts: hosts
  serial: 1
  tasks:
    - debug: msg="{{ inventory_hostname }}"
    - debug: msg="{{ play_hosts.index(inventory_hostname) }}"
    - local_action: 'lineinfile create=yes dest=/tmp/test.conf
        line="host{{ play_hosts.index(inventory_hostname) }}=
        {{ inventory_hostname }}"'
Edit: If you are just trying to operate on the list of hosts in the inventory from the localhost, a better way is to avoid acting per host and using local_action in the first place.
- name: Debug me
  hosts: localhost
  tasks:
    - lineinfile:
        create: yes
        dest: /tmp/test.conf
        line: "host{{ groups['hosts'].index(item) }}={{ item }}"
      with_items: "{{ groups['hosts'] }}"
This will get you the results you desire. Then you can add another play to run operations against the hosts themselves.
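For reference, with the two-host inventory from the question the resulting /tmp/test.conf should contain both entries in a deterministic order, since a single task on localhost writes them:

host0=my.host1.com
host1=my.host2.com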
My solution to avoid race-condition problems with the non-thread-safe local_action: lineinfile writing gathered data to a local file: split it across two different plays in the same file, e.g.:
- name: gather_data
  hosts: all
  any_errors_fatal: false
  gather_facts: no
  tasks:
    - name: get_Aptus_device_count_list
      shell: gather_data.sh
      become: true
      register: Aptus_device_count_list
      changed_when: false

- name: Log_gathered_data
  hosts: all
  any_errors_fatal: false
  gather_facts: no
  tasks:
    - name: log_gathered_info
      local_action:
        module: lineinfile
        dest: /home/rms-mit/MyAnsible/record_Device_count_collection.out
        line: "\n--- {{ inventory_hostname }} --- \n{{ Aptus_device_count_list.stdout }} \n.\n---\n"
      changed_when: false

getting "skipping: no hosts matched" with ansible playbook

My Ansible (2.1.1.0) playbook below throws a "skipping: no hosts matched" error when playing the [ec2-test] hosts. The Ansible hosts file has the FQDN of the newly created instance added. It runs fine if I re-run the playbook a second time, but the first run throws the "no hosts matched" error :(
my playbook:
---
- name: Provision an EC2 instance
  hosts: localhost
  connection: local
  gather_facts: no
  become: False
  vars_files:
    - awsdetails.yml
  tasks:
    - name: Launch the new EC2 Instance
      ec2:
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        group_id: "{{ security_group_id }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        key_name: "{{ ssh_keyname }}"
        wait: yes
        region: "{{ region }}"
        count: 1
      register: ec2
    - name: Update the ansible hosts file with new IP address
      local_action: lineinfile dest="/etc/ansible/hosts" regexp={{ item.dns_name }} insertafter='\[ec2-test\]' line="{{ item.dns_name }} ansible_ssh_private_key_file=/etc/ansible/e2-key-file.pem ansible_user=ec2-user"
      with_items: ec2.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- name: playing ec2-test instances
  hosts: ec2-test
  gather_facts: no
My hosts file has these inventory entries:
[localhost]
localhost
....
[ec2-test]
ec2-54-244-180-186.us-west-2.compute.amazonaws.com
Any idea why I am getting the "skipping: no hosts matched" error even though the host shows up here? Any help would be greatly appreciated.
Thanks!
It seems like Ansible reads the inventory file just once and does not refresh it after you add an entry during playbook execution; hence it cannot see the newly added host.
To fix this you may force Ansible to refresh the entire inventory with the below task executed after you update the inventory file:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory
or you may not update the inventory file at all and use the add_host task instead:
- add_host:
    name: "{{ item.dns_name }}"
    groups: ec2-test
  with_items: "{{ ec2.instances }}"
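With either approach, the follow-up play targeting the group should find the new host on the first run. A minimal sketch reusing the play from the question:

- name: playing ec2-test instances
  hosts: ec2-test
  gather_facts: no
  tasks:
    - name: confirm the host is reachable (illustrative)
      ping: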
