referencing the instance id of an EC2 resource just created in Ansible - amazon-ec2

I am having a difficult time referencing the instance id of an EC2 resource that I have just created. After I create it, I would immediately like to terminate it. My code is below:
Thank you,
Bill
---
- name: Example of provisioning servers
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: set_fact1
      set_fact: foo=1
    - name: Create security group
      local_action:
        module: ec2_group
        name: ep2
        description: Access to the Episode2 servers
        region: us-east-1
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
    - name: Launch instances
      local_action:
        module: ec2
        instance_tags:
          Name: server1
          Env: myenv
        region: us-east-1
        keypair: ansiblekeypair
        group: ep2
        instance_type: m1.small
        image: ami-1aae3a0c
        count: 1
        wait: yes
      register: ec2
    - name: Terminate instances that were previously launched
      ec2:
        state: absent
        region: us-east-1
        instance_ids: "{{ ec2.instance_id[0] }}"
      with_items: ec2

You need to reference ec2.instances[0].id.
It is useful to add a debug: var=ec2 task, or to run the playbook with the -vv switch, to see the detailed values of registered variables and check which properties are available for use.
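Applied to the playbook above, the terminate task becomes something like this (a sketch; instance_ids accepts a value directly, so the with_items loop is no longer needed):

```yaml
- name: Terminate instances that were previously launched
  ec2:
    state: absent
    region: us-east-1
    instance_ids: "{{ ec2.instances[0].id }}"
    wait: yes
```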

If you want to terminate the instances just launched (assuming that their tags are specific) you could also use the exact_count facility:
exact_count: 0
count_tag:
  - Name: server1
  - Env: myenv
As per the doc (highlights are mine):
exact_count
An integer value which indicates how many instances that match the count_tag parameter should be running. Instances are either created or terminated based on this value.
and count_tag:
Used with exact_count to determine how many nodes based on a specific tag criteria should be running. [...]
It would give something like:
- name: Terminate instances
  local_action:
    module: ec2
    instance_tags:
      Name: server1
      Env: myenv
    region: us-east-1
    keypair: ansiblekeypair
    exact_count: 0
    count_tag:
      - Name: server1
      - Env: myenv
    wait: yes

Related

Create ec2 instance within vpc with two nics through ansible

After a day of googling I decided to give up and ask here.
I'm still quite new to Ansible and AWS, so this question might lack background information, which I'm happy to provide on request.
What I'm trying to achieve:
Write an Ansible playbook which creates a new EC2 instance within my VPC.
This instance shall be provided with two new NICs, eth0 and eth1. Each NIC should be associated with its own specific security group.
My playbook so far is built like this:
Create eth0
Create eth1
Create ec2 instance
My problem:
All documentation says I need to provide the eni-id of the interface I'd like to attach to my instance. I can't provide this, since the ids do not exist yet. The only thing I know is the name of the interfaces, so I tried to fetch the id of each interface in a separate task, which also didn't work.
If I register the output of the creation of eth{0,1} in Ansible, the whole result object is stored, which breaks validation later when I reference those variables in the instance-creation section. The same happens with the extra lookup step after the creation.
More about the setup:
Running a VPC in AWS, hosts inside VPC are only accessible through VPN.
Running Ansible on macOS:
ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/Users/mg/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/7.1.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/mg/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.11.1 (main, Dec 23 2022, 09:40:27) [Clang 14.0.0 (clang-1400.0.29.202)] (/usr/local/Cellar/ansible/7.1.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Playbook:
---
- name: Create ec2 instances
  hosts: localhost
  gather_facts: false
  tasks:
    # Block is a Group of Tasks combined together
    - name: Get Info Block
      block:
        - name: Get Running instance Info
          ec2_instance_info:
          register: ec2info
        - name: Print info
          debug: var="ec2info.instances"
      # By specifying always on the tag,
      # I let this block run all the time.
      # This is for safety, to not create ec2 instances accidentally.
      tags: ['always', 'getinfoonly']
    - name: Create ec2 block
      block:
        - amazon.aws.ec2_vpc_net_info:
            vpc_ids: vpc-XXXXXXXXXXXXXXXXX
        - name: Create ec2 network interface eth0_lan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
            description: "eth0_lan_{{ vpc_hostname }}"
            subnet_id: "{{ vpc_subnetid }}"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: "sg-XXXXXXXXXXXXXXXXX"
        - name: Get id of eth0
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
          register: eth0
        - name: Create ec2 network interface eth1_wan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
            description: "eth1_wan_{{ vpc_hostname }}"
            subnet_id: "subnet-XXXXXXXXXXXXXXXXX"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: 'sg-XXXXXXXXXXXXXXXXX'
        - name: Get id of eth1
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
          register: eth1
        - name: Launch ec2 instances
          tags: ec2-create
          amazon.aws.ec2_instance:
            name: "{{ vpc_hostname }}"
            region: "eu-central-1"
            key_name: "MyKey"
            image_id: ami-XXXXXXXXXXXXXXXXX
            vpc_subnet_id: "{{ vpc_subnetid }}"
            instance_type: "{{ instance_type }}"
            volumes:
              - device_name: /dev/sda1
                ebs:
                  volume_size: 30
                  delete_on_termination: true
            network:
              interfaces:
                - id: "{{ eth0 }}"
                - id: "{{ eth1 }}"
            detailed_monitoring: true
          register: ec2
          delegate_to: localhost
      # By specifying never on the tag of this block,
      # I let this block run only when explicitly being called
      tags: ['never', 'ec2-create']
(If you're wondering about the tag-stuff, this comes from the tutorial I followed initially, credits: https://www.middlewareinventory.com/blog/ansible-aws-ec2/#How_Ansible_works_with_AWS_EC2_Setup_Boto_for_Ansible)
The execution of the ansible playbook breaks with this error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Invalid type for parameter NetworkInterfaces[1].NetworkInterfaceId, value: {'changed': True, 'interface': {'id': 'eni-XXXXXXXXXXXXXXXXX', 'subnet_id': 'subnet-XXXXXXXXXXXXXXXXX', 'vpc_id': 'vpc-XXXXXXXXXXXXXXXXX', 'description': 'somedescription', 'owner_id': 'XXXXXXXXXXXXXXXXX', 'status': 'available', 'mac_address': 'xx:xx:xx:xx:xx:xx', 'private_ip_address': 'xx.xx.xxx.xx', 'source_dest_check': True, 'groups': {'sg-XXXXXXXXXXXXXXXXX': 'SGNAME'}, 'private_ip_addresses': [{'private_ip_address': 'xx.xx.xxx.xx', 'primary_address': True}], 'name': 'eth1_wan_<fqdn>', 'tags': {'Name': 'eth1_wan_<fqdn>'}}, 'failed': False}, type: <class 'dict'>, valid types: <class 'str'>
So, my colleague and I managed to solve this: use "{{ eth0.interface.id }}" instead. However, all instances continued to terminate themselves on creation, with Client.InternalError in the AWS console. This was related to the KMS/EBS encryption I had turned on by default that day.
It turned out that I had tried to use an asymmetric customer managed key for default EBS encryption. As soon as I replaced it with a symmetric one, it worked and the instances would start.
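Putting the fix together, the interface references in the launch task would look like this (a sketch of just the changed section, using the results registered by the two lookup tasks above):

```yaml
network:
  interfaces:
    - id: "{{ eth0.interface.id }}"
    - id: "{{ eth1.interface.id }}"
```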

Create Ec2 instance and install Python package using Ansible Playbook

I've created an Ansible playbook to create an EC2 instance and install Python so I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but it doesn't install Python on the newly created EC2 instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation
    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web
    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"
    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add Python to those EC2 instances, you cannot use most Ansible modules to do it, because they're written in Python. That is what the raw: module is designed to fix:
- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious, since python is not a service, but I would guess it has not run yet, hence your question here.

How can I run different tasks on different hosts?

I was trying to create an Ansible playbook in such a way that it first creates an EC2 instance, using localhost as the host.
After that, the task must return the IP of the new instance, and on the newly created instance I want to install Splunk. Can someone help me?
- hosts: localhost
  connection: local
  gather_facts: False
  tasks:
    - name: create a new ec2 key pair, returns generated private key
      ec2_key:
        name: my_keypair3
        force: false
        region: us-east-1
      register: ec2_key_result
    - name: Save private key
      copy: content="{{ ec2_key_result.key.private_key }}" dest="./akey.pem" mode=0600
      when: ec2_key_result.changed
    - name: Provision a set of instances
      ec2:
        key_name: my_keypair3
        group: SplunkSecurityGroup
        instance_type: t2.micro
        image: ami-04b9e92b5572fa0d1
        wait: true
        region: us-east-1
        exact_count: 1
        count_tag:
          Name: Demo
        instance_tags:
          Name: v3
    - name: Downloading Splunk
      get_url:
        url: "https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.0.1&product=splunk&filename=splunk-8.0.1-6db836e2fb9e-linux-2.6-amd64.deb&wget=true"
        dest: ~/splunk.deb
        checksum: md5:29723caba24ca791c6d30445f5dfe6
See Splunkenizer for Ansible configuration for deploying Splunk
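To run the Splunk installation on the new instance rather than on localhost, the add_host pivot from the previous answer applies here too. A sketch, assuming the provisioning task is registered as ec2 (the ec2 module returns tagged_instances when exact_count is used):

```yaml
    - name: Provision a set of instances
      ec2:
        # ... parameters as above ...
      register: ec2
    - name: Add the new instance to an in-memory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: splunk_hosts
      loop: "{{ ec2.tagged_instances }}"

- hosts: splunk_hosts
  become: yes
  tasks:
    # the get_url download would also move here, onto the new host
    - name: Install the downloaded Splunk package
      apt:
        deb: ~/splunk.deb
```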

How to stop old instance from ELB in ansible?

I have a playbook which creates an instance and adds it to a load balancer. How can I remove/stop an old instance that is already assigned to the ELB? I want to make sure that we stop the old instance first and then add the new ones, or vice versa.
I am using AWS ELB and EC2 instances.
I hope this might help you. Once you have the instance id you can do whatever you want with it; here I am just removing the instance from the ELB, but you can use the ec2 module to terminate it.
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Get the facts about the ELB
      ec2_elb_facts:
        names: rbgeek-dev-web-elb
        region: "eu-west-1"
      register: elb_facts
    - name: Instance(s) ID that are currently registered to the ELB
      debug:
        msg: "{{ elb_facts.elbs.0.instances }}"
    - name: Tag the old instances as zombie
      ec2_tag:
        resource: "{{ item }}"
        region: "eu-west-1"
        state: present
        tags:
          Instance_Status: "zombie"
      with_items: "{{ elb_facts.elbs.0.instances }}"
    - name: Refresh the ec2.py cache
      shell: ./inventory/ec2.py --refresh-cache
      changed_when: no
    - name: Refresh inventory
      meta: refresh_inventory

# must check this step with ec2_remote_facts_module.html
# http://docs.ansible.com/ansible/ec2_remote_facts_module.html
- hosts: tag_Instance_Status_zombie
  gather_facts: no
  tasks:
    - name: Get the facts about the zombie instances
      ec2_facts:
    - name: remove instance by instance id
      ec2:
        state: absent
        region: "eu-west-1"
        instance_ids: "{{ ansible_ec2_instance_id }}"
        wait: true
      delegate_to: localhost
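To satisfy the "stop the old instance first" requirement more directly, the zombie instance can be deregistered from the ELB before it is terminated. A sketch using the ec2_elb module from the same Ansible era, inserted before the terminate task (the ELB name is the one from the play above):

```yaml
    - name: Deregister the zombie instance from the ELB
      ec2_elb:
        instance_id: "{{ ansible_ec2_instance_id }}"
        ec2_elbs: rbgeek-dev-web-elb
        region: "eu-west-1"
        state: absent
      delegate_to: localhost
```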

Problems with using Ansible 1.9 to assign floating IPs to OpenStack Nova instance

I'm trying to configure Ansible 1.9 to launch some OpenStack Nova instances.
For each instance, I'm attempting to auto-assign a floating IP, connecting it to a public segment.
When I try this:
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        wait_for: 200
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
      neutron_floating_ip:
        state=present
        login_username=tenant_2_user
        login_password=hello
        login_tenant_name=tenant_2
        network_name=ext-net
        instance_name=Web01
I get : ERROR: neutron_floating_ip is not a legal parameter in an Ansible task or handler
And when I try this:
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        wait_for: 200
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
        auto_floating_ip: yes
I get: msg: unsupported parameter for module: auto_floating_ip
Here is my Ansible version: ansible --version
ansible 1.9
configured module search path = /usr/share/ansible
What can I do to have Ansible assign these floating IPs?
-Eugene
I got it working. You don't need to use
auto_floating_ip: yes
just use
floating_ip_pools:
  - Your-external/public-network-id
Hope this helps.
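For completeness, the working task would then look roughly like this (a sketch; the pool name ext-net is taken from the question's first attempt and may differ in your cloud):

```yaml
- nova_compute:
    state: present
    login_username: tenant_2_user
    login_password: hello
    login_tenant_name: tenant_2
    name: Web01
    auth_url: http://mylocalhost:5000/v2.0/
    region_name: RegionOne
    image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
    wait_for: 200
    flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
    security_groups: default
    floating_ip_pools:
      - ext-net
```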
