getting "skipping: no hosts matched" with ansible playbook - amazon-ec2

My Ansible (2.1.1.0) playbook below throws a "skipping: no hosts matched" error when it plays the [ec2-test] hosts. The Ansible hosts file has the FQDN of the newly created instance added. The playbook runs fine if I re-run it a second time, but the first run throws the "no hosts matched" error :(
My playbook:
---
- name: Provision an EC2 instance
  hosts: localhost
  connection: local
  gather_facts: no
  become: False
  vars_files:
    - awsdetails.yml
  tasks:
    - name: Launch the new EC2 Instance
      ec2:
        aws_access_key: "{{ aws_id }}"
        aws_secret_key: "{{ aws_key }}"
        group_id: "{{ security_group_id }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        key_name: "{{ ssh_keyname }}"
        wait: yes
        region: "{{ region }}"
        count: 1
      register: ec2

    - name: Update the ansible hosts file with new IP address
      local_action: lineinfile dest="/etc/ansible/hosts" regexp={{ item.dns_name }} insertafter='\[ec2-test\]' line="{{ item.dns_name }} ansible_ssh_private_key_file=/etc/ansible/e2-key-file.pem ansible_user=ec2-user"
      with_items: ec2.instances

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.instances

- name: playing ec2-test instances
  hosts: ec2-test
  gather_facts: no
My hosts file has these inventory entries:
[localhost]
localhost
....
[ec2-test]
ec2-54-244-180-186.us-west-2.compute.amazonaws.com
Any idea why I am getting the "skipping: no hosts matched" error even though the host shows up here? Any help would be greatly appreciated.
Thanks!

It seems like Ansible reads the inventory file just once and does not refresh it after you add an entry to it during playbook execution. Hence, it cannot see the newly added host.
To fix this you can force Ansible to refresh the entire inventory with the task below, executed after you update the inventory file:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory
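In the playbook from the question, that refresh belongs right after the lineinfile task that edits /etc/ansible/hosts and before the second play that targets ec2-test; a sketch of that ordering, reusing the tasks from the question:

    - name: Update the ansible hosts file with new IP address
      local_action: lineinfile dest="/etc/ansible/hosts" regexp={{ item.dns_name }} insertafter='\[ec2-test\]' line="{{ item.dns_name }} ansible_ssh_private_key_file=/etc/ansible/e2-key-file.pem ansible_user=ec2-user"
      with_items: ec2.instances

    - name: Refresh inventory to ensure new instances exist in inventory
      meta: refresh_inventory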
or you may not update the inventory file at all and use the add_host task instead:
- add_host:
    name: "{{ item.dns_name }}"
    groups: ec2-test
  with_items: "{{ ec2.instances }}"
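If you go the add_host route, the connection details that the lineinfile task was writing into /etc/ansible/hosts can be attached to the in-memory host instead; a sketch reusing the key file and user from the question:

- add_host:
    name: "{{ item.dns_name }}"
    groups: ec2-test
    ansible_user: ec2-user
    ansible_ssh_private_key_file: /etc/ansible/e2-key-file.pem
  with_items: "{{ ec2.instances }}"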

Related

Ansible print created hosts

I wrote an Ansible playbook that creates multiple VMs. The playbook is split into two files, Main.yaml and vars.yaml. It creates the VMs and seems to work nicely. I'm not getting any errors, so I assume it successfully added the created hosts to the inventory. I want to check whether the created hosts were added to the inventory. How can I print/list the hosts of the inventory? My goal is to run scripts on the created VMs at a later stage. Thanks.
**Main.yaml**
#########CREATING VM#########
---
- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  vars_files:
    - vars.yaml
  tasks:
    - name: create VM
      os_server:
        name: "{{ item.name }}"
        state: present
        image: "{{ item.image }}"
        boot_from_volume: True
        security_groups: ssh
        flavor: "{{ item.flavor }}"
        key_name: mykey
        region_name: "{{ lookup('env', 'OS_REGION_NAME') }}"
        nics:
          - net-name: private
        wait: yes
      register: instances
      with_items: "{{ instance_definitions }}"
    ############################################
    - name: wait 15 seconds
      pause: seconds=15
      when: instances.changed
    ######DEBUG#################################
    - name: display results
      debug:
        msg: "{{ item }}"
      with_items: "{{ instances.results }}"
    ############################################
    - name: Add new VM to ansible Inventory
      add_host:
        name: "{{ item.server.name }}"
        ansible_host: "{{ item.server.public_v4 }}"
        ansible_user: "{{ ansible_user }}"
        ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
        groups: just_created
      with_items: "{{ instances.results }}"
**vars.yaml**
---
instance_definitions:
  - { name: Debian Jessie, image: Debian Jessie 8, flavor: c1.small, loginame: debian }
  - { name: Debian Stretch, image: Debian Stretch 9, flavor: c1.small, loginame: debian }
This is what magic variables are for.
All your hosts will be in a list:
groups['just_created']
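For example, a single debug task (a minimal sketch, reusing the just_created group from the playbook above) prints every host that add_host registered:

- name: show the hosts that were just added
  debug:
    var: groups['just_created']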
The example below illustrates how to create in-memory hosts and list them. The magic sauce is that add_host and the play that uses the newly added hosts have to be separate plays.
---
- name: adding host playbook
  hosts: localhost
  connection: local
  tasks:
    - name: add host to in-memory inventory
      add_host:
        name: awesome_host_name
        groups: in_memory

- name: checking hosts
  hosts: in_memory
  connection: local
  gather_facts: false
  tasks:
    - debug: var=group_names
    - debug: msg="{{ inventory_hostname }}"
    - debug: var=hostvars[inventory_hostname]
You can print the content of the inventory file from the playbook using:
- debug: msg="the hosts are {{ lookup('file', '/etc/ansible/hosts') }}"
Alternatively, you can list the hosts from the command line using:
ansible --list-hosts all
or use this command from the playbook:
tasks:
  - name: list hosts
    command: ansible --list-hosts all
    register: hosts

  - debug:
      msg: "{{ hosts.stdout_lines }}"

How to switch Ansible playbook to another host when calling second playbook

I have two playbooks: the first iterates over the list of ESXi servers and gets the list of all VMs, then passes that list to the second playbook, which should iterate over the IPs of those VMs. Instead, it is still trying to execute on the last ESXi server. I need to switch the host to the VM IP that I'm currently passing to the second playbook. I don't know how to switch... Anybody?
First playbook:
- name: get VM list from ESXi
  hosts: all
  tasks:
    - name: get facts
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts

    - name: Debugging data
      debug:
        msg: "IP of {{ item.key }} is {{ item.value.ip_address }} and is {{ item.value.power_state }}"
      with_dict: "{{ esx_facts.virtual_machines }}"

    - name: Passing data to include file
      include: includeFile.yml ip_address="{{ item.value.ip_address }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
My second playbook:
- name: <<Check the IP received
  debug:
    msg: "Received IP: {{ ip_address }}"

- name: <<Get custom facts
  vmware_vm_facts:
    hostname: "{{ ip_address }}"
    username: root
    password: passw
    validate_certs: False
  delegate_to: localhost
  register: custom_facts
I do receive the correct VM's ip_address but vmware_vm_facts is still trying to run on the ESXi server instead...
If you can't afford to set up dynamic inventory for the VMs, your playbook should have two plays: one for collecting the VMs' IPs and another for tasks on those VMs, like this (pseudocode):
---
- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - vmware_vm_facts:
        ... params_here ...
      register: vmfacts

    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"

- hosts: myvms
  tasks:
    - apt:
        name: "*"
        state: latest
Within the first play we collect facts from each hypervisor and populate the myvms group of the in-memory inventory. Within the second play we run the apt module for every host in the myvms group.
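If the second play has to log in to the VMs over SSH, the same add_host task can also carry connection variables; a sketch under the assumption that the VMs accept a vmuser account and a shared private key (both names are illustrative, not from the question):

    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        ansible_user: vmuser                           # hypothetical login account
        ansible_ssh_private_key_file: ~/.ssh/vm_key    # hypothetical key path
        groups: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"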

Ansible wait for initializing host before doing tasks in playbook

My host needs time (about 20s) to initialize its CLI session before it can accept CLI commands.
I'm trying to run the command from an Ansible playbook:
---
- name: Run show sub command
  hosts: em
  gather_facts: no
  remote_user: duypn
  tasks:
    - name: wait for SSH to respond on all hosts
      local_action: wait_for host=em port=22 delay=60 state=started

    - name: run show sub command
      raw: show sub id=xxxxx;display=term-type
After 10 minutes, Ansible gives me output which is not the result of the show sub command :(
...
["CLI Session initializing..", "Autocompleter initializing..", "CLI>This session has been IDLE for too long.",
...
I'd be glad to hear your suggestions. Thank you :)
I don't have a copy-paste solution for you, but one thing I learned is to put a sleep after SSH is 'up' to allow the machine to finish its work. This might give you a nudge in the right direction.
- name: Wait for SSH to come up
  local_action: wait_for
                host={{ item.public_ip }}
                port=22
                state=started
  with_items: "{{ ec2.instances }}"

- name: waiting for a few seconds to let the machine start
  pause:
    seconds: 20
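Applied to the playbook in the question, that means pausing between the wait_for task and the raw command; a sketch reusing the tasks from the question, with the pause length being a guess:

    - name: wait for SSH to respond on all hosts
      local_action: wait_for host=em port=22 delay=60 state=started

    - name: give the CLI session time to initialize
      pause:
        seconds: 30   # assumption: adjust to however long the CLI takes to come up

    - name: run show sub command
      raw: show sub id=xxxxx;display=term-type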
So I had the same problem and this is how I solved it:
---
- name: "Get instances info"
  ec2_instance_facts:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
    filters:
      vpc-id: "{{ vpc_id }}"
      private-ip-address: "{{ ansible_ssh_host }}"
  delegate_to: localhost
  register: my_ec2

- name: "Waiting for {{ hostname }} to respond"
  wait_for:
    host: "{{ item.public_ip_address }}"
    state: "{{ state }}"
    sleep: 1
    port: 22
  delegate_to: localhost
  with_items:
    - "{{ my_ec2.instances }}"
That is the playbook named aws_ec2_status.
The playbook I ran looks like this:
---
# Create an ec2 instance in aws
- hosts: nodes
  gather_facts: false
  serial: 1
  vars:
    state: "present"
  roles:
    - aws_create_ec2

- hosts: nodes
  gather_facts: no
  vars:
    state: "started"
  roles:
    - aws_ec2_status
The reason I split the create and the status check into two different playbooks is that I want the playbook to create instances without waiting for one to be ready before creating the next.
But if the second instance depends on the first one, you should combine them.
FYI: let me know if you want to see my aws_create_ec2 playbook.

Ansible ec2_remote_facts

I am using ec2_remote_facts to find the ec2 facts of an instance based on the tags.
- name: Search ec2
  ec2_remote_facts:
    filters:
      "tag:Name": "{{ hostname }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
    region: "{{ aws_region }}"
  register: ec2_info
Now I want to get the instance id of that particular host and store it in a variable so I can use it later in my playbook.
Can someone please help in finding or extracting the instance id?
Thanks,
You should use your registered variable 'ec2_info':
- debug: var=ec2_info

- debug: var=item
  with_items: "{{ ec2_info.instance_ids }}"

# register new variable
- set_fact:
    instance_ids: "{{ ec2_info.instance_ids }}"

- add_host: hostname={{ item.public_ip }} groupname=ec2hosts
  with_items: "{{ ec2_info.instances }}"

- name: wait for instances to listen on port:22
  wait_for:
    state=started
    host={{ item.public_dns_name }}
    port=22
  with_items: "{{ ec2_info.instances }}"
# Connect to the node and gather facts,
# including the instance-id. These facts
# are added to inventory hostvars for the
# duration of the playbook's execution
- hosts: ec2hosts
  gather_facts: True
  user: ec2-user
  sudo: True
  tasks:
    # fetch instance data from the metadata servers in ec2
    - ec2_facts:
    # show all known facts for this host
    - debug: var=hostvars[inventory_hostname]
    # just show the instance-id
    - debug: msg="{{ hostvars[inventory_hostname]['ansible_ec2_instance-id'] }}"

Ansible: How to stop an EC2 instance started by a different playbook

If I start an EC2 instance (or set of instances) in Ansible, how can I later refer to that instance (or set) and terminate them?
This is across playbooks. So, in one playbook I've already started an instance, then later, I want to terminate those instances with another playbook.
Thanks.
You will need to do something like this (working example):
terminate.yml:
- name: terminate single instance
  hosts: all
  tasks:
    - action: ec2_facts

    - name: terminating single instance
      local_action:
        module: ec2
        state: 'absent'
        region: us-east-1
        instance_ids: "{{ ansible_ec2_instance_id }}"
To terminate the instance at address instance.example.com:
$ ansible-playbook -i instance.example.com, terminate.yml
The ec2_facts module will query the metadata service on the instance to get the instance ID.
The ec2 module is used to terminate the instance by its ID.
Note that the ec2_facts module needs to run on the instance(s) that you want to terminate, and you will probably want to use an inventory file or dynamic inventory to look up the instance(s) by tag instead of addressing them by hostname.
Well, if we are going to use dynamic inventory, then I recommend using count_tag and exact_count with the ec2 module when creating the instances with create.yml:
---
- hosts: localhost
  connection: local
  gather_facts: false
  vars_files: [ ./env.yml ]
  tasks:
    - name: Provision a set of instances
      ec2:
        instance_type: "{{ item.value.instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ item.value.vpc_subnet_id }}"
        tenancy: "{{ tenancy }}"
        group_id: "{{ group_id }}"
        key_name: "{{ key_name }}"
        wait: true
        instance_tags:
          Name: "{{ env_id }}"
          Type: "{{ item.key }}"
        count_tag:
          Type: "{{ item.key }}"
        exact_count: "{{ item.value.count }}"
      with_dict: "{{ servers }}"
      register: ec2
The env.yml file has all those variables, and the servers dictionary:
---
env_id: JaxDemo
key_name: JaxMagicKeyPair
image: "ami-xxxxxxxx"
region: us-east-1
group_id: "sg-xxxxxxxx,sg-yyyyyyyy,sg-zzzzzzzz"
tenancy: dedicated
servers:
  app:
    count: 2
    vpc_subnet_id: subnet-xxxxxxxx
    instance_type: m3.medium
  httpd:
    count: 1
    vpc_subnet_id: subnet-yyyyyyyy
    instance_type: m3.medium
  oracle:
    count: 1
    vpc_subnet_id: subnet-zzzzzzzz
    instance_type: m4.4xlarge
Now, if you want to change the number of servers, just change the count in the servers dictionary. If you want to delete all of them, set all the counts to 0.
Or, if you prefer, copy the create.yml file to delete_all.yml, and replace
exact_count: "{{ item.value.count }}"
with
exact_count: 0
I'll extend jarv's answer with a working example in which you can terminate a single server or a group of servers with an Ansible playbook.
Let's suppose you want to terminate the instance(s) that fall under the delete group in your hosts file:
[delete]
X.X.X.X # IP address of an EC2 instance
Now your playbook will look like this (in my case, I named it "ec2_terminate.yml"):
---
- hosts: delete
  gather_facts: True
  user: ubuntu
  sudo: True
  tasks:
    # fetch instance data from the metadata servers in ec2
    - ec2_facts:
    # just show the instance-id
    - debug: msg="{{ hostvars[inventory_hostname]['ansible_ec2_instance_id'] }}"

- hosts: delete
  gather_facts: True
  connection: local
  vars:
    region: "us-east-1"
  tasks:
    - name: destroy all instances
      ec2: state='absent'
           region={{ region }}
           instance_ids={{ item }}
           wait=true
      with_items: "{{ hostvars[inventory_hostname]['ansible_ec2_instance_id'] }}"
Now run this playbook like this:
ansible-playbook -i hosts ec2_terminate.yml
When you originally create those instances, make sure to uniquely identify/tag them.
Then on the following runs you can use a dynamic inventory script to populate your host list, as seen here http://docs.ansible.com/intro_dynamic_inventory.html, and then terminate the instances matching your tags.
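For instance, the EC2 dynamic inventory script puts instances into groups derived from their tags, such as tag_Name_myapp for instances tagged Name=myapp (the tag value here is only an illustration); a sketch that terminates every host in such a group, mirroring the terminate.yml example above:

- hosts: tag_Name_myapp          # group created by the EC2 dynamic inventory script
  gather_facts: True
  tasks:
    - name: look up the instance id from the metadata service
      ec2_facts:

    - name: terminate the tagged instance
      local_action:
        module: ec2
        state: absent
        region: us-east-1        # region reused from the examples above
        instance_ids: "{{ ansible_ec2_instance_id }}"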
