Create ec2 instance within vpc with two nics through ansible - amazon-ec2

After a day of googling, I decided to give up and ask here:
I'm still quite new to ansible and AWS, hence this question might lack background information which I'm happy to provide on request.
What I'm trying to achieve:
Write an Ansible playbook, which creates a new ec2 instance within my vpc.
This instance shall be provided with two new NICs, eth0 and eth1. Each NIC should be associated with its own specific security group.
My playbook so far is built like this:
Create eth0
Create eth1
Create ec2 instance
My problem:
All documentation says I need to provide the eni-id of the interface I'd like to attach to my instance. I can't provide this, since the IDs do not exist yet. The only thing I know is the name of the interfaces, so I was trying to get the IDs of the interfaces separately, which also didn't work.
If I try to register the output of the creation of eth{0,1} in Ansible, the whole result object is stored, and that breaks validation later when the variables are used in the instance-creation section. Same with the extra lookup step after the creation process.
More about the setup:
Running a VPC in AWS, hosts inside VPC are only accessible through VPN.
Running Ansible on macOS:
ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/Users/mg/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/7.1.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/mg/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.11.1 (main, Dec 23 2022, 09:40:27) [Clang 14.0.0 (clang-1400.0.29.202)] (/usr/local/Cellar/ansible/7.1.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Playbook:
---
- name: Create ec2 instances
  hosts: localhost
  gather_facts: false
  tasks:
    # Block is a group of tasks combined together
    - name: Get Info Block
      block:
        - name: Get Running instance Info
          ec2_instance_info:
          register: ec2info
        - name: Print info
          debug: var="ec2info.instances"
      # By specifying "always" on the tag,
      # I let this block run all the time by module_default.
      # This is a safeguard so that ec2 instances are not created accidentally.
      tags: ['always', 'getinfoonly']
    - name: Create ec2 block
      block:
        - amazon.aws.ec2_vpc_net_info:
            vpc_ids: vpc-XXXXXXXXXXXXXXXXX
        - name: Create ec2 network interface eth0_lan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
            description: "eth0_lan_{{ vpc_hostname }}"
            subnet_id: "{{ vpc_subnetid }}"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: "sg-XXXXXXXXXXXXXXXXX"
        - name: Get id of eth0
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
          register: eth0
        - name: Create ec2 network interface eth1_wan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
            description: "eth1_wan_{{ vpc_hostname }}"
            subnet_id: subnet-XXXXXXXXXXXXXXXXX
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: "sg-XXXXXXXXXXXXXXXXX"
        - name: Get id of eth1
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
          register: eth1
        - name: Launch ec2 instances
          tags: ec2-create
          amazon.aws.ec2_instance:
            name: "{{ vpc_hostname }}"
            region: "eu-central-1"
            key_name: "MyKey"
            image_id: ami-XXXXXXXXXXXXXXXXX
            vpc_subnet_id: "{{ vpc_subnetid }}"
            instance_type: "{{ instance_type }}"
            volumes:
              - device_name: /dev/sda1
                ebs:
                  volume_size: 30
                  delete_on_termination: true
            network:
              interfaces:
                - id: "{{ eth0 }}"
                - id: "{{ eth1 }}"
            detailed_monitoring: true
          register: ec2
          delegate_to: localhost
      # By specifying "never" on the tag of this block,
      # I let this block run only when it is explicitly called.
      tags: ['never', 'ec2-create']
(If you're wondering about the tag-stuff, this comes from the tutorial I followed initially, credits: https://www.middlewareinventory.com/blog/ansible-aws-ec2/#How_Ansible_works_with_AWS_EC2_Setup_Boto_for_Ansible)
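With this tag layout, a plain run only executes the 'always'-tagged info block, while the create block runs only when its tag is requested explicitly. Hypothetical invocations (the playbook filename and extra vars are my assumptions, not from the question):

# Info-only run; the 'never'-tagged create block is skipped:
ansible-playbook create_ec2.yml

# Explicitly trigger the create block:
ansible-playbook create_ec2.yml --tags ec2-create -e "vpc_hostname=myhost vpc_subnetid=subnet-XXXXXXXXXXXXXXXXX instance_type=t3.micro"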
The execution of the ansible playbook breaks with this error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Invalid type for parameter NetworkInterfaces[1].NetworkInterfaceId, value: {'changed': True, 'interface': {'id': 'eni-XXXXXXXXXXXXXXXXX', 'subnet_id': 'subnet-XXXXXXXXXXXXXXXXX', 'vpc_id': 'vpc-XXXXXXXXXXXXXXXXX', 'description': 'somedescription', 'owner_id': 'XXXXXXXXXXXXXXXXX', 'status': 'available', 'mac_address': 'xx:xx:xx:xx:xx:xx', 'private_ip_address': 'xx.xx.xxx.xx', 'source_dest_check': True, 'groups': {'sg-XXXXXXXXXXXXXXXXX': 'SGNAME'}, 'private_ip_addresses': [{'private_ip_address': 'xx.xx.xxx.xx', 'primary_address': True}], 'name': 'eth1_wan_<fqdn>', 'tags': {'Name': 'eth1_wan_<fqdn>'}}, 'failed': False}, type: <class 'dict'>, valid types: <class 'str'>

So, my colleague and I managed to solve this: use "{{ eth0.interface.id }}" instead. However, all instances then terminated themselves on creation, with Client.InternalError in the AWS console. This was related to the KMS/EBS default encryption I had turned on that same day.
It turned out that I had tried to use an asymmetric customer managed key for default EBS encryption. As soon as I replaced it with a symmetric one, it worked and the instances would start.
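For reference, the fixed interface references in the launch task look like this (a minimal excerpt of the playbook above, using the registered eth0/eth1 results):

network:
  interfaces:
    - id: "{{ eth0.interface.id }}"
    - id: "{{ eth1.interface.id }}"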

Related

Create Ec2 instance and install Python package using Ansible Playbook

I've created an Ansible playbook to create an EC2 instance and install Python, so I can connect to the server via SSH.
The playbook successfully creates an EC2 instance, but it doesn't install Python on the newly created EC2 instance; instead it installs Python on my master machine.
Can someone help me resolve this?
My Code:
- hosts: localhost
  remote_user: ubuntu
  become: yes
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        key_name: yahoo
        instance_type: t2.micro
        region: "ap-south-1"
        image: ami-0860c9429baba6ad2
        count: 1
        vpc_subnet_id: subnet-aa84fbe6
        assign_public_ip: yes
      tags:
        - creation
    - name: Going to Install Python
      apt:
        name: python
        state: present
      tags:
        - Web
    - name: Start the service
      service:
        name: python
        state: started
      tags:
        - start
As is shown in the fine manual, ec2: operations should be combined with add_host: in order to "pivot" off of localhost over to the newly provisioned instance, which one cannot add to an inventory file because it doesn't exist yet:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: I'm going to create a Ec2 instance
      ec2:
        ...etc etc
      register: run_instances
    - name: Add new instance to host group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: just_launched
      loop: "{{ run_instances.instances }}"
    - name: Wait for SSH to come up
      delegate_to: "{{ item.public_ip }}"
      wait_for_connection:
        delay: 60
        timeout: 320
      loop: "{{ run_instances.instances }}"

- hosts: just_launched
  # and now you can do fun things to those ec2 instances
I suspect your question about "installing python" is unrelated, but just in case: if you genuinely do need to add python to those ec2 instances, you cannot use most ansible modules to do that, because they're written in python. But that is what the raw: module is designed to fix:

- hosts: just_launched
  # you cannot gather facts without python, either
  gather_facts: no
  tasks:
    - raw: |
        echo watch out for idempotency issues with your playbook using raw
        yum install -y python
    - name: NOW gather the facts
      setup:
    - command: echo and now you have working ansible modules again
I also find your invocation of service: { name: python, state: started } suspicious (python is an interpreter, not a system service), but I would guess it has not run yet, given your question here.

Provisioning multiple spot instances with instance_interruption_behavior using Ansible

I know it seems to relate to other questions asked in the past, but I feel it does not.
You will be the judge of that.
I've been using Ansible for 1.5 years now, and I'm aware it's not the best tool to provision infrastructure, yet for my needs it suffices.
I'm using AWS as my cloud provider.
Is there any way to make Ansible provision multiple spot instances of the same type (similar to the count attribute in the ec2 Ansible module) with instance_interruption_behavior set to stop instead of the default terminate?
My Goal:
Set up multiple EC2 instances with separate EC2 spot requests (one spot request per instance), but instead of the default instance_interruption_behavior, which is terminate, I wish to set it to stop.
What I've done so far:
I'm aware that Ansible has the following module to handle EC2 infrastructure, mainly the ec2 module:
https://docs.ansible.com/ansible/latest/modules/ec2_module.html
Not to be confused with the ec2_instance module:
https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html
I could use the ec2 module to set up spot instances, as I've done so far, as follows:
ec2:
  key_name: mykey
  region: us-west-1
  instance_type: "{{ instance_type }}"
  image: "{{ instance_ami }}"
  wait: yes
  wait_timeout: 100
  spot_price: "{{ spot_bid }}"
  spot_wait_timeout: 600
  spot_type: persistent
  group_id: "{{ created_sg.group_id }}"
  instance_tags:
    Name: "Name"
  count: "{{ my_count }}"
  vpc_subnet_id: "{{ subnet }}"
  assign_public_ip: yes
  monitoring: no
  volumes:
    - device_name: /dev/sda1
      volume_type: gp2
      volume_size: "{{ volume_size }}"
Pre-problem:
The above module call is nice and clean and allows me to set up spot instances according to my requirements, using the ec2 module and the count attribute.
Yet it doesn't allow me to set instance_interruption_behavior to stop (unless I'm wrong); it just defaults to terminate.
Other modules:
To be released in Ansible 2.8.0 (currently only available in devel), there is a new module called ec2_launch_template:
https://docs.ansible.com/ansible/devel/modules/ec2_launch_template_module.html
This module gives you more attributes, which, in combination with the ec2_instance module, allows you to set the desired instance_interruption_behavior to stop.
This is what I did:
- hosts: 127.0.0.1
  connection: local
  gather_facts: True
  vars_files:
    - vars/main.yml
  tasks:
    - name: "create security group"
      include_tasks: tasks/create_sg.yml
    - name: "Create launch template to support interruption behavior 'STOP' for spot instances"
      ec2_launch_template:
        name: spot_launch_template
        region: us-west-1
        key_name: mykey
        instance_type: "{{ instance_type }}"
        image_id: "{{ instance_ami }}"
        instance_market_options:
          market_type: spot
          spot_options:
            instance_interruption_behavior: stop
            max_price: "{{ spot_bid }}"
            spot_instance_type: persistent
        monitoring:
          enabled: no
        block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              volume_type: gp2
              volume_size: "{{ manager_instance_volume_size }}"
      register: my_template
    - name: "set up a single spot instance"
      ec2_instance:
        instance_type: "{{ instance_type }}"
        tags:
          Name: "My_Spot"
        launch_template:
          id: "{{ my_template.latest_template.launch_template_id }}"
        security_groups:
          - "{{ created_sg.group_id }}"
The main problem:
As shown in the snippet above, the ec2_launch_template Ansible module does not support the count attribute, and the ec2_instance module doesn't offer it either.
Is there any way to make Ansible provision multiple instances of the same type (similar to the count attribute in the ec2 module)?
In general, if you are managing lots of copies of EC2 instances at once, you probably want to use either Auto Scaling groups or EC2 Fleets.
There is an ec2_asg module that allows you to manage auto-scaling groups via ansible. This would allow you to deploy multiple EC2 instances based on that launch template.
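A minimal sketch of that approach, assuming the ec2_asg module (Ansible 2.8+) and the launch template registered as my_template above; the ASG name, capacity values, and subnet are made-up placeholders:

- name: Create an auto-scaling group from the launch template
  ec2_asg:
    name: spot-asg                      # hypothetical ASG name
    region: us-west-1
    launch_template:
      launch_template_id: "{{ my_template.latest_template.launch_template_id }}"
    min_size: 3                         # placeholder capacity
    max_size: 3
    desired_capacity: 3
    vpc_zone_identifier:
      - subnet-XXXXXXXX                 # placeholder subnet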
Another technique to solve this issue would be to use the cloudformation or terraform modules if AWS cloudformation templates or terraform plans support what you want to do. This would also let you use Jinja2 templating syntax to do some clever template generation before supplying the infrastructure templates if necessary.
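For the CloudFormation route, the module call itself is short; a hedged sketch (the stack name, template path, and parameter are invented for illustration):

- name: Deploy spot instances from a CloudFormation template
  cloudformation:
    stack_name: spot-instances-stack    # hypothetical stack name
    state: present
    region: us-west-1
    template: files/spot_instances.yml  # hypothetical template, possibly rendered with Jinja2 beforehand
    template_parameters:
      InstanceType: "{{ instance_type }}"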

referencing the instance id of an EC2 resource just created in Ansible

I am having a difficult time referencing the instance id of an EC2 resource that I have just created. After I create it, I would immediately like to terminate it. My code is below:
Thank you,
Bill
---
- name: Example of provisioning servers
  hosts: 127.0.0.1
  connection: local
  tasks:
    - name: set_fact1
      set_fact: foo=1
    - name: Create security group
      local_action:
        module: ec2_group
        name: ep2
        description: Access to the Episode2 servers
        region: us-east-1
        rules:
          - proto: tcp
            from_port: 22
            to_port: 22
            cidr_ip: 0.0.0.0/0
    - name: Launch instances
      local_action:
        module: ec2
        instance_tags:
          Name: server1
          Env: myenv
        region: us-east-1
        keypair: ansiblekeypair
        group: ep2
        instance_type: m1.small
        image: ami-1aae3a0c
        count: 1
        wait: yes
      register: ec2
    - name: Terminate instances that were previously launched
      ec2:
        state: absent
        region: us-east-1
        instance_ids: "{{ ec2.instance_id[0] }}"
      with_items: ec2
You need to reference ec2.instances[0].id.
It is useful to add a debug: var=ec2 task, or to run the playbook with the -vv switch, to see the detailed values of registered variables and check which properties are available for use.
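Applied to the question's playbook, the terminate task would become (a minimal sketch; the region and the registered variable name come from the question):

- name: Terminate instances that were previously launched
  ec2:
    state: absent
    region: us-east-1
    instance_ids: "{{ ec2.instances[0].id }}"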
If you want to terminate the instances just launched (assuming that their tags are specific) you could also use the exact_count facility:
exact_count: 0
count_tag:
  - Name: server1
  - Env: myenv
As per the doc (highlights are mine):
exact_count
An integer value which indicates how many instances that match the count_tag parameter should be running. Instances are either created or terminated based on this value.
and count_tag:
Used with exact_count to determine how many nodes based on a specific tag criteria should be running. [...]
It would give something like:
- name: Terminate instances
local_action:
module: ec2
instance_tags:
Name: server1
Env: myenv
region: us-east-1
keypair: ansiblekeypair
exact_count: 0
count_tag:
- Name: server1
- Env: myenv
wait: yes

How to stop old instance from ELB in ansible?

I have a playbook which creates an instance and adds it to a load balancer. How can I remove/stop an old instance that is already assigned to the ELB? I want to make sure that we stop the old instances first and then add the new ones, or vice versa.
I am using an AWS ELB and EC2 instances.
I hope this might help you. Once you have the instance ID, you can do whatever you want with it; I am just removing it from the ELB, but you can use the ec2 module to terminate it.
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Get the facts about the ELB
      ec2_elb_facts:
        names: rbgeek-dev-web-elb
        region: "eu-west-1"
      register: elb_facts
    - name: Instance(s) ID that are currently registered to the ELB
      debug:
        msg: "{{ elb_facts.elbs.0.instances }}"
    - name: Tag the old instances as zombie
      ec2_tag:
        resource: "{{ item }}"
        region: "eu-west-1"
        state: present
        tags:
          Instance_Status: "zombie"
      with_items: "{{ elb_facts.elbs.0.instances }}"
    - name: Refresh the ec2.py cache
      shell: ./inventory/ec2.py --refresh-cache
      changed_when: no
    - name: Refresh inventory
      meta: refresh_inventory

# must check this step with the ec2_remote_facts module:
# http://docs.ansible.com/ansible/ec2_remote_facts_module.html
- hosts: tag_Instance_Status_zombie
  gather_facts: no
  tasks:
    - name: Get the facts about the zombie instances
      ec2_facts:
    - name: Remove instance by instance id
      ec2:
        state: absent
        region: "eu-west-1"
        instance_ids: "{{ ansible_ec2_instance_id }}"
        wait: true
      delegate_to: localhost
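If you only want to deregister the old instance from the ELB without terminating it, a minimal sketch using the legacy ec2_elb module of that Ansible era might look like this (the ELB name is the one from the play above):

    - name: Deregister the zombie instance from the ELB
      ec2_elb:
        state: absent
        instance_id: "{{ ansible_ec2_instance_id }}"
        ec2_elbs:
          - rbgeek-dev-web-elb
        region: "eu-west-1"
        wait: yes
      delegate_to: localhost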

Problems with using Ansible 1.9 to assign floating IPs to OpenStack Nova instance

I'm trying to configure Ansible 1.9 to launch some OpenStack Nova instances.
For each instance, I'm attempting to auto-assign a floating IP, connecting it to a public segment.
When I try this:
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        wait_for: 200
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
    - neutron_floating_ip:
        state=present
        login_username=tenant_2_user
        login_password=hello
        login_tenant_name=tenant_2
        network_name=ext-net
        instance_name=Web01
I get: ERROR: neutron_floating_ip is not a legal parameter in an Ansible task or handler
And when I try this:
- name: launch Web01 instance
  hosts: csc
  tasks:
    - nova_compute:
        state: present
        login_username: tenant_2_user
        login_password: hello
        login_tenant_name: tenant_2
        name: Web01
        auth_url: http://mylocalhost:5000/v2.0/
        region_name: RegionOne
        image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
        wait_for: 200
        flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
        security_groups: default
        auto_floating_ip: yes
I get: msg: unsupported parameter for module: auto_floating_ip
Here is my Ansible version: ansible --version
ansible 1.9
configured module search path = /usr/share/ansible
What can I do to have Ansible assign these floating IPs?
-Eugene
I got it working. You don't need to use
auto_floating_ip: yes
Just use:
floating_ip_pools:
  - Your-external/public-network-id
Hope this helps.
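In context, that parameter goes on the nova_compute task itself. A minimal sketch based on the question's own task (all credentials and IDs are the question's placeholders; using ext-net from the first attempt as the pool is my assumption):

- nova_compute:
    state: present
    login_username: tenant_2_user
    login_password: hello
    login_tenant_name: tenant_2
    name: Web01
    auth_url: http://mylocalhost:5000/v2.0/
    region_name: RegionOne
    image_id: 95c5f4f2-84f2-47fb-a466-3c786677d21c
    flavor_id: b772be9a-98cd-446f-879e-89baef600ff0
    security_groups: default
    floating_ip_pools:
      - ext-net    # the external network named in the question's first attempt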
