Ansible ec2_remote_facts

I am using ec2_remote_facts to find the ec2 facts of an instance based on the tags.
- name: Search ec2
  ec2_remote_facts:
    filters:
      "tag:Name": "{{ hostname }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
  register: ec2_info
Now I want to get the instance id of that particular host and store it in a variable so I can use it later in my playbook.
Can someone please help in finding or extracting the instance id?
Thanks,

You should use your registered variable 'ec2_info':
- debug: var=ec2_info

- debug: var=item
  with_items: "{{ ec2_info.instance_ids }}"

# register new variable
- set_fact:
    instance_ids: "{{ ec2_info.instance_ids }}"

- add_host: hostname={{ item.public_ip }} groupname=ec2hosts
  with_items: "{{ ec2_info.instances }}"

- name: wait for instances to listen on port 22
  wait_for:
    state: started
    host: "{{ item.public_dns_name }}"
    port: 22
  with_items: "{{ ec2_info.instances }}"
# Connect to the node and gather facts,
# including the instance-id. These facts
# are added to inventory hostvars for the
# duration of the playbook's execution
- hosts: ec2hosts
  gather_facts: True
  user: ec2-user
  sudo: True
  tasks:
    # fetch instance data from the metadata servers in ec2
    - ec2_facts:

    # show all known facts for this host
    - debug: var=hostvars[inventory_hostname]

    # just show the instance-id
    - debug: msg="{{ hostvars[inventory_hostname]['ansible_ec2_instance-id'] }}"
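If all you need is the instance id of the single host matched by the tag filter, you can also pull it straight out of the registered variable in the same play. A minimal sketch, assuming ec2_remote_facts returns an instances list whose items expose an id field (the variable name my_instance_id is just an example):

- name: Store the instance id of the matched host
  set_fact:
    my_instance_id: "{{ ec2_info.instances[0].id }}"

- name: Show the stored instance id
  debug:
    msg: "Instance id for {{ hostname }} is {{ my_instance_id }}"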

Related

How to switch Ansible playbook to another host when calling second playbook

I have two playbooks. My first playbook iterates over the list of ESXi servers, getting the list of all VMs, and then passes that list to the second playbook, which should iterate over the IPs of the VMs. Instead it is still trying to execute on the last ESXi server. I need to switch the host to the VM IP that I'm currently passing to the second playbook. I don't know how to switch... Anybody?
First playbook:
- name: get VM list from ESXi
  hosts: all
  tasks:
    - name: get facts
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts

    - name: Debugging data
      debug:
        msg: "IP of {{ item.key }} is {{ item.value.ip_address }} and is {{ item.value.power_state }}"
      with_dict: "{{ esx_facts.virtual_machines }}"

    - name: Passing data to include file
      include: includeFile.yml ip_address="{{ item.value.ip_address }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
My second playbook:
- name: <<Check the IP received
  debug:
    msg: "Received IP: {{ ip_address }}"

- name: <<Get custom facts
  vmware_vm_facts:
    hostname: "{{ ip_address }}"
    username: root
    password: passw
    validate_certs: False
  delegate_to: localhost
  register: custom_facts
I do receive the correct VM's ip_address but vmware_vm_facts is still trying to run on the ESXi server instead...
If you can't afford to set up dynamic inventory for the VMs, your playbook should have two plays: one for collecting the VMs' IPs and another for the tasks on those VMs, like this (pseudocode):
---
- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - vmware_vm_facts:
        ... params_here ...
      register: vmfacts

    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"

- hosts: myvms
  tasks:
    - apt:
        name: "*"
        state: latest
Within the first play we collect facts from each hypervisor and populate the myvms group of the in-memory inventory. Within the second play we run the apt module for every host in the myvms group.
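Adapted to the question's setup, the first play might look like the following. This is only a sketch, reusing the connection variables from the question (ansible_ssh_user / ansible_ssh_pass) and assuming the ESXi servers are the hosts of the play:

---
- hosts: all          # the ESXi servers from your inventory
  gather_facts: no
  tasks:
    - name: Collect VM facts from each hypervisor
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts

    - name: Register every VM in the in-memory group myvms
      add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        groups: myvms
      with_dict: "{{ esx_facts.virtual_machines }}"

The second play (hosts: myvms) then runs its tasks directly on the VMs, so there is no need to pass ip_address around via include.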

Ansible Handle Multiple Elements in Hash

I'm trying to load a list of hosts in my playbook to provision a KVM. I need to key this off of the hosts.yml because another playbook is going to take the hosts and connect to them once they're up.
This is my hosts.yml:
kvm:
  hosts:
    kvm01
dcos:
  dcos-bootstrap:
    hosts:
      dcos-bootstrap
    vars:
      lv_size: "10g"
  dcos-masters:
    hosts:
      dcos-master-1
    vars:
      lv_size: "50g"
  dcos-agents:
    hosts:
      dcos-agent-1
      dcos-agent-2
    vars:
      lv_size: "50g"
So on a single KVM I run this playbook to create the logical volumes for each of the Virtual Machines corresponding to the dcos hosts:
---
- hosts: kvm
  tasks:
    - name: Get a list of vm's to create
      include_vars:
        file: "../hosts.yml"

    - name: Verify the host list
      debug: var=dcos
      when: dcos is defined

    - name: Provision Volume Groups
      lvol:
        vg: vg01
        lv: "{{ item.value.hosts }}"
        size: "{{ item.value.vars.lv_size }}"
      with_dict: "{{ dcos }}"
This works fine until you include more than one host for a group. I've tried other loops but I'm not sure how to continue. How can I iterate over the hash while working on each host in a group?
I'm new to Ansible so I really wasn't sure how to accomplish what I wanted. I initially tried to parse the hosts file myself, not knowing that Ansible does this for me. Now I know...
All of the host and group data is available in hostvars and groups. All I needed to do was use them like so:
vars:
  dcoshosts: "{{ groups['dcos'] }}"
tasks:
  - name: List groups
    debug:
      msg: "{{ groups }}"

  - name: Get All DCOS hosts
    debug:
      msg: "Host: {{ item }}"
    with_items:
      - "{{ dcoshosts }}"

  - name: Provision Volume Groups
    lvol:
      vg: vg01
      lv: "{{ item }}"
      size: "{{ hostvars[item].lv_size }}"
    with_items: "{{ dcoshosts }}"
I ended up using a hosts.ini file instead of yaml because the ini worked and the yaml didn't. Here it is to complete the picture:
[dcos-masters]
dcos-master-1
dcos-master-2
[dcos-masters:vars]
lv_size="50g"
[dcos-agents]
dcos-agent-1
dcos-agent-2
[dcos-agents:vars]
lv_size="50g"
[dcos-bootstraps]
dcos-bootstrap
[dcos-bootstraps:vars]
lv_size="10g"
[dcos:children]
dcos-masters
dcos-agents
dcos-bootstraps
Thanks to everybody for the help and pushing me to my solution :)
You are trying to reinvent what Ansible already provides.
This is the Ansible way of doing it:
Define your hosts and groups in an inventory file dcos.ini:
[dcos-bootstraps]
dcos-bootstrap
[dcos-masters]
dcos-master-1
[dcos-agents]
dcos-agent-1
dcos-agent-2
[dcos:children]
dcos-bootstraps
dcos-masters
dcos-agents
Then you customize the group parameters in group variables.
Bootstrap parameters in group_vars/dcos-bootstraps.yml:
---
lv_size: 10g
Masters parameters in group_vars/dcos-masters.yml:
---
lv_size: 50g
Agents parameters in group_vars/dcos-agents.yml:
---
lv_size: 50g
And your playbook becomes quite simple:
---
- hosts: dcos
  tasks:
    - name: Provision Volume Groups
      lvol:
        vg: vg01
        lv: "{{ inventory_hostname }}"
        size: "{{ lv_size }}"
When you run this, each host gets its lv_size parameter based on its group membership.
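To verify the variable resolution before touching any volumes, a quick check like this can help (just a sketch; the inventory file name dcos.ini follows the answer above and the playbook file name is arbitrary):

- hosts: dcos
  gather_facts: no
  tasks:
    - name: Show which lv_size each host resolved
      debug:
        msg: "{{ inventory_hostname }} -> {{ lv_size }}"

Run it with: ansible-playbook -i dcos.ini check_lv_size.yml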
First of all, I suspect you have incorrect data.
If you want your hosts to be a list, you should make it so:
kvm:
  hosts:
    kvm01
dcos:
  dcos-bootstrap:
    hosts:
      - dcos-bootstrap
    vars:
      lv_size: "10g"
  dcos-masters:
    hosts:
      - dcos-master-1
    vars:
      lv_size: "50g"
  dcos-agents:
    hosts:
      - dcos-agent-1
      - dcos-agent-2
    vars:
      lv_size: "50g"
Note the hyphens.
And to your question: if you are not going to use "group" names in your loop (like dcos-bootstrap, etc.), you can use with_subelements:
- name: Provision Volume Groups
  lvol:
    vg: vg01
    lv: "{{ item.1 }}"
    size: "{{ item.0.vars.lv_size }}"
  with_subelements:
    - "{{ dcos }}"
    - hosts

Ansible wait for initializing host before doing tasks in playbook

My host needs time (about 20 s) to initialize its CLI session before it will accept CLI commands.
I'm trying to run the command from an Ansible playbook:
---
- name: Run show sub command
  hosts: em
  gather_facts: no
  remote_user: duypn
  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for host=em port=22 delay=60 state=started

    - name: run show sub command
      raw: show sub id=xxxxx;display=term-type
After 10 minutes, Ansible gives me output which is not the result of the show sub command :(
...
["CLI Session initializing..", "Autocompleter initializing..", "CLI>This session has been IDLE for too long.",
...
I'd be glad to hear your suggestions. Thank you :)
I don't have a copy-paste solution for you, but one thing I learned is to put a sleep after SSH is 'up' to allow the machine to finish its work. This might give you a nudge in the right direction.
- name: Wait for SSH to come up
  local_action: wait_for host={{ item.public_ip }} port=22 state=started
  with_items: "{{ ec2.instances }}"

- name: waiting for a few seconds to let the machine start
  pause:
    seconds: 20
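Applied to the question's playbook, that would mean pausing between the wait_for and the raw command. A sketch, keeping the original task parameters and assuming 30 seconds is enough for the CLI session to initialize:

  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for host=em port=22 delay=60 state=started

    - name: give the CLI session time to initialize
      pause:
        seconds: 30

    - name: run show sub command
      raw: show sub id=xxxxx;display=term-type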
So I had the same problem and this is how I solved it:
---
- name: "Get instances info"
  ec2_instance_facts:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    filters:
      vpc-id: "{{ vpc_id }}"
      private-ip-address: "{{ ansible_ssh_host }}"
  delegate_to: localhost
  register: my_ec2

- name: "Waiting for {{ hostname }} to respond"
  wait_for:
    host: "{{ item.public_ip_address }}"
    state: "{{ state }}"
    sleep: 1
    port: 22
  delegate_to: localhost
  with_items:
    - "{{ my_ec2.instances }}"
That is the playbook named aws_ec2_status.
The playbook I ran looks like this:
---
# Create an ec2 instance in aws
- hosts: nodes
  gather_facts: false
  serial: 1
  vars:
    state: "present"
  roles:
    - aws_create_ec2

- hosts: nodes
  gather_facts: no
  vars:
    state: "started"
  roles:
    - aws_ec2_status
The reason I split the create and the check into two different playbooks is that I want the playbook to create the instances without waiting for one to be ready before creating the other. But if the second instance depends on the first one, you should combine them.
FYI Let me know if you want to see my aws_create_ec2 playbook.

getting "skipping: no hosts matched" with ansible playbook

My Ansible (2.1.1.0) playbook below is throwing a "skipping: no hosts matched" error when playing against the host group [ec2-test]. The Ansible hosts file has the FQDN of the newly created instance added. It runs fine if I re-run the playbook a second time, but the first run throws the "no hosts matched" error :(
my playbook:
---
- name: Provision an EC2 instance
  hosts: localhost
  connection: local
  gather_facts: no
  become: False
  vars_files:
    - awsdetails.yml
  tasks:
    - name: Launch the new EC2 Instance
      ec2:
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        group_id: "{{ security_group_id }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        key_name: "{{ ssh_keyname }}"
        wait: yes
        region: "{{ region }}"
        count: 1
      register: ec2
    - name: Update the ansible hosts file with new IP address
      local_action: lineinfile dest="/etc/ansible/hosts" regexp={{ item.dns_name }} insertafter='\[ec2-test\]' line="{{ item.dns_name }} ansible_ssh_private_key_file=/etc/ansible/e2-key-file.pem ansible_user=ec2-user"
      with_items: ec2.instances
    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- name: playing ec2-test instances
  hosts: ec2-test
  gather_facts: no
My hosts file has these inventory entries:
[localhost]
localhost
....
[ec2-test]
ec2-54-244-180-186.us-west-2.compute.amazonaws.com
Any idea why I am getting the "skipping: no hosts matched" error even though the host shows up here? Any help would be greatly appreciated.
Thanks!
It seems like Ansible reads the inventory file just once and does not refresh it after you add an entry to it during the playbook execution. Hence, it cannot see the newly added host.
To fix this you may force Ansible to refresh the entire inventory with the below task executed after you update the inventory file:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory
or you may not update the inventory file at all and use the add_host task instead:
- add_host:
    name: "{{ item.dns_name }}"
    groups: ec2-test
  with_items: ec2.instances
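For completeness, here is a sketch of how the end of the first play could look with the add_host approach. The key file path and user are reused from the question's lineinfile task; the second play (hosts: ec2-test) then matches the in-memory group on the first run:

    - name: Add the new instance to the in-memory ec2-test group
      add_host:
        name: "{{ item.dns_name }}"
        groups: ec2-test
        ansible_ssh_private_key_file: /etc/ansible/e2-key-file.pem
        ansible_user: ec2-user
      with_items: "{{ ec2.instances }}"

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: "{{ ec2.instances }}"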

Ansible: How to stop an EC2 instance started by a different playbook

If I start an EC2 instance (or set of instances) in Ansible, how can I later refer to that instance (or set) and terminate them?
This is across playbooks. So, in one playbook I've already started an instance, then later, I want to terminate those instances with another playbook.
Thanks.
You will need to do something like this (working example):
terminate.yml:
- name: terminate single instance
  hosts: all
  tasks:
    - action: ec2_facts

    - name: terminating single instance
      local_action:
        module: ec2
        state: 'absent'
        region: us-east-1
        instance_ids: "{{ ansible_ec2_instance_id }}"
To terminate the instance at address instance.example.com:
$ ansible-playbook -i instance.example.com, terminate.yml
The ec2 facts module will query the metadata service on the instance to get the instance ID.
The ec2 module is used to terminate the instance by its ID.
Note the ec2_facts module needs to run on the instance(s) that you want to terminate, and you will probably want to use an inventory file or dynamic inventory to look up the instance(s) by tag instead of addressing them by hostname.
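For example (a sketch, with a hypothetical inventory file and group name), the same terminate.yml can be pointed at a whole group of instances instead of a single address:

# inventory.ini (hypothetical hosts)
[to_terminate]
ec2-54-244-180-186.us-west-2.compute.amazonaws.com
ec2-54-244-180-187.us-west-2.compute.amazonaws.com

$ ansible-playbook -i inventory.ini -l to_terminate terminate.yml

Since terminate.yml targets hosts: all, the -l limit restricts the run to that group.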
Well, if we are going to use the dynamic inventory, then I recommend using count_tag and exact_count with the ec2 module when creating the instances with create.yml:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars_files: [ ./env.yml ]
  tasks:
    - name: Provision a set of instances
      ec2:
        instance_type: "{{ item.value.instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ item.value.vpc_subnet_id }}"
        tenancy: "{{ tenancy }}"
        group_id: "{{ group_id }}"
        key_name: "{{ key_name }}"
        wait: true
        instance_tags:
          Name: "{{ env_id }}"
          Type: "{{ item.key }}"
        count_tag:
          Type: "{{ item.key }}"
        exact_count: "{{ item.value.count }}"
      with_dict: "{{ servers }}"
      register: ec2
The env.yml file has all those variables, and the servers dictionary:
---
env_id: JaxDemo
key_name: JaxMagicKeyPair
image: "ami-xxxxxxxx"
region: us-east-1
group_id: "sg-xxxxxxxx,sg-yyyyyyyy,sg-zzzzzzzz"
tenancy: dedicated
servers:
  app:
    count: 2
    vpc_subnet_id: subnet-xxxxxxxx
    instance_type: m3.medium
  httpd:
    count: 1
    vpc_subnet_id: subnet-yyyyyyyy
    instance_type: m3.medium
  oracle:
    count: 1
    vpc_subnet_id: subnet-zzzzzzzz
    instance_type: m4.4xlarge
Now, if you want to change the number of servers, just change the count in the servers dictionary. If you want to delete all of them, set all the counts to 0.
Or, if you prefer, copy the create.yml file to delete_all.yml, and replace
exact_count: "{{ item.value.count }}"
with
exact_count: 0
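An alternative to keeping a second copy of the file is to parametrize exact_count with an extra variable. This is just a sketch; the teardown variable is hypothetical and not part of the original answer:

        count_tag:
          Type: "{{ item.key }}"
        # 0 when tearing down, otherwise the configured count
        exact_count: "{{ 0 if (teardown | default(false) | bool) else item.value.count }}"

Then ansible-playbook create.yml scales to the counts in env.yml, while ansible-playbook create.yml -e teardown=true removes all of the tagged instances.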
I'll extend jarv's answer with a working example in which you can terminate a single server or a group of servers with an Ansible playbook.
Let's suppose you want to terminate the instance(s) that fall under the delete group in your hosts file:
[delete]
X.X.X.X # IP address of an EC2 instance
Now your playbook will look like this (in my case, I named it "ec2_terminate.yml"):
---
- hosts: delete
  gather_facts: True
  user: ubuntu
  sudo: True
  tasks:
    # fetch instance data from the metadata servers in ec2
    - ec2_facts:

    # just show the instance-id
    - debug: msg="{{ hostvars[inventory_hostname]['ansible_ec2_instance_id'] }}"

- hosts: delete
  gather_facts: True
  connection: local
  vars:
    region: "us-east-1"
  tasks:
    - name: destroy all instances
      ec2: state='absent'
           region={{ region }}
           instance_ids={{ item }}
           wait=true
      with_items: "{{ hostvars[inventory_hostname]['ansible_ec2_instance_id'] }}"
Now run this playbook like this:
ansible-playbook -i hosts ec2_terminate.yml
When you originally create those instances, make sure to uniquely identify/tag them.
Then on the following runs you can use a dynamic inventory script to populate your host list, as seen here: http://docs.ansible.com/intro_dynamic_inventory.html, and terminate the instances matching your tags.
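As a sketch of that approach (assuming the classic ec2.py dynamic inventory script, which exposes tag-based groups named tag_<key>_<value>, and a hypothetical Name=demo tag), the terminate playbook from the first answer could then be limited to a tag group instead of a hostname:

$ ansible-playbook -i ec2.py terminate.yml --limit tag_Name_demo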
