Provisioning multiple spot instances with instance_interruption_behavior using Ansible - amazon-ec2

I know this seems related to other questions asked in the past, but I feel it's not. You will be the judge of that.
I've been using Ansible for 1.5 years now, and I'm aware it's not the best tool for provisioning infrastructure, yet for my needs it suffices. I'm using AWS as my cloud provider.
Is there any way to make Ansible provision multiple spot instances of the same type (similar to the count attribute in the ec2 Ansible module) with instance_interruption_behavior set to stop instead of the default terminate?
My Goal:
Set up multiple EC2 instances, each backed by its own spot request submitted to Amazon (one spot request per instance), but instead of the default instance_interruption_behavior, which is terminate, I wish to set it to stop.
What I've done so far:
I'm aware that Ansible has the following module to handle EC2 infrastructure, mainly the ec2 module:
https://docs.ansible.com/ansible/latest/modules/ec2_module.html
Not to be confused with the ec2_instance module:
https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html
I could use the ec2 module to set up spot instances, as I've done so far, as follows:
ec2:
  key_name: mykey
  region: us-west-1
  instance_type: "{{ instance_type }}"
  image: "{{ instance_ami }}"
  wait: yes
  wait_timeout: 100
  spot_price: "{{ spot_bid }}"
  spot_wait_timeout: 600
  spot_type: persistent
  group_id: "{{ created_sg.group_id }}"
  instance_tags:
    Name: "Name"
  count: "{{ my_count }}"
  vpc_subnet_id: "{{ subnet }}"
  assign_public_ip: yes
  monitoring: no
  volumes:
    - device_name: /dev/sda1
      volume_type: gp2
      volume_size: "{{ volume_size }}"
Pre-problem:
The above task is nice and clean and lets me set up spot instances according to my requirements, using the ec2 module and the count attribute.
Yet it doesn't let me set instance_interruption_behavior to stop (unless I'm wrong); it just defaults to terminate.
Other modules:
To be released in Ansible version 2.8.0 (currently only available in devel), there is a new module called ec2_launch_template:
https://docs.ansible.com/ansible/devel/modules/ec2_launch_template_module.html
This module exposes more attributes, which, in combination with the ec2_instance module, lets you set the desired instance_interruption_behavior to stop.
This is what I did:
- hosts: 127.0.0.1
  connection: local
  gather_facts: True
  vars_files:
    - vars/main.yml
  tasks:
    - name: "create security group"
      include_tasks: tasks/create_sg.yml

    - name: "Create launch template to support interruption behavior 'STOP' for spot instances"
      ec2_launch_template:
        name: spot_launch_template
        region: us-west-1
        key_name: mykey
        instance_type: "{{ instance_type }}"
        image_id: "{{ instance_ami }}"
        instance_market_options:
          market_type: spot
          spot_options:
            instance_interruption_behavior: stop
            max_price: "{{ spot_bid }}"
            spot_instance_type: persistent
        monitoring:
          enabled: no
        block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              volume_type: gp2
              volume_size: "{{ manager_instance_volume_size }}"
      register: my_template

    - name: "set up a single spot instance"
      ec2_instance:
        instance_type: "{{ instance_type }}"
        tags:
          Name: "My_Spot"
        launch_template:
          id: "{{ my_template.latest_template.launch_template_id }}"
        security_groups:
          - "{{ created_sg.group_id }}"
The main problem:
As the snippet above shows, the ec2_launch_template Ansible module does not support a count attribute, and the ec2_instance module doesn't offer one either.
Is there any way to make Ansible provision multiple instances of the same type (similar to the count attribute in the ec2 module)?

In general, if you are managing lots of copies of EC2 instances at once, you probably want to use either Auto Scaling groups or EC2 Fleets.
There is an ec2_asg module that allows you to manage Auto Scaling groups via Ansible. This would let you deploy multiple EC2 instances based on that launch template, as sketched below.
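A minimal sketch of that approach, assuming the spot_launch_template created in the question already exists and that your module version supports the launch_template parameter (it landed alongside ec2_launch_template in 2.8):

- name: Create an Auto Scaling group from the spot launch template
  ec2_asg:
    name: spot-asg
    region: us-west-1
    launch_template:
      launch_template_name: spot_launch_template
    # Pinning min/max/desired to the same value stands in for 'count'
    min_size: "{{ my_count }}"
    max_size: "{{ my_count }}"
    desired_capacity: "{{ my_count }}"
    vpc_zone_identifier:
      - "{{ subnet }}"
    wait_for_instances: yes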
Another technique would be to use the cloudformation or terraform modules, if AWS CloudFormation templates or Terraform plans support what you want to do. This would also let you use Jinja2 templating syntax to do some clever template generation before supplying the infrastructure templates, if necessary, as sketched below.
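For example, a sketch using the cloudformation module, where templates/spot_fleet.yml.j2 is a hypothetical Jinja2-templated CloudFormation template of your own:

- name: Render a CloudFormation template with Jinja2 and deploy it
  cloudformation:
    stack_name: spot-instances-stack
    state: present
    region: us-west-1
    # The template lookup renders the Jinja2 file before CloudFormation sees it
    template_body: "{{ lookup('template', 'templates/spot_fleet.yml.j2') }}"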

Related

Create ec2 instance within vpc with two nics through ansible

After a day of googling I decided to give up and ask here:
I'm still quite new to Ansible and AWS, hence this question might lack background information, which I'm happy to provide on request.
What I'm trying to achieve:
Write an Ansible playbook which creates a new EC2 instance within my VPC.
This instance shall be provided with two new NICs, eth0 and eth1, each of which should be associated with one specific security group.
My playbook so far is built like this:
Create eth0
Create eth1
Create ec2 instance
My problem:
All documentation says I need to provide the ENI ID of the interface I'd like to attach to my instance. I can't provide this, since the IDs do not exist yet. The only thing I know is the name of the interfaces, so I tried to fetch the IDs of the interfaces in a separate step, which also didn't work.
If I register the output of the creation of eth{0,1} in Ansible, the whole output is stored, which breaks validation later when I reference the variables in the instance-creation section. Same with the extra lookup step after the creation process.
More about the setup:
Running a VPC in AWS, hosts inside VPC are only accessible through VPN.
Running Ansible on macOS:
ansible --version
ansible [core 2.14.1]
config file = None
configured module search path = ['/Users/mg/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/7.1.0/libexec/lib/python3.11/site-packages/ansible
ansible collection location = /Users/mg/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.11.1 (main, Dec 23 2022, 09:40:27) [Clang 14.0.0 (clang-1400.0.29.202)] (/usr/local/Cellar/ansible/7.1.0/libexec/bin/python3.11)
jinja version = 3.1.2
libyaml = True
Playbook:
---
- name: Create ec2 instances
  hosts: localhost
  gather_facts: false
  tasks:
    # A block is a group of tasks combined together
    - name: Get Info Block
      block:
        - name: Get Running instance Info
          ec2_instance_info:
          register: ec2info

        - name: Print info
          debug: var="ec2info.instances"
      # By specifying 'always' on the tag, I let this block run all the time;
      # this is a safeguard so I don't create ec2 instances accidentally
      tags: ['always', 'getinfoonly']

    - name: Create ec2 block
      block:
        - amazon.aws.ec2_vpc_net_info:
            vpc_ids: vpc-XXXXXXXXXXXXXXXXX

        - name: Create ec2 network interface eth0_lan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
            description: "eth0_lan_{{ vpc_hostname }}"
            subnet_id: "{{ vpc_subnetid }}"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: "sg-XXXXXXXXXXXXXXXXX"

        - name: Get id of eth0
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth0_lan_{{ vpc_hostname }}"
          register: eth0

        - name: Create ec2 network interface eth1_wan
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
            description: "eth1_wan_{{ vpc_hostname }}"
            subnet_id: "subnet-XXXXXXXXXXXXXXXXX"
            state: present
            delete_on_termination: true
            region: eu-central-1
            security_groups: 'sg-XXXXXXXXXXXXXXXXX'

        - name: Get id of eth1
          delegate_to: localhost
          tags: ec2-create
          amazon.aws.ec2_eni:
            name: "eth1_wan_{{ vpc_hostname }}"
          register: eth1

        - name: Launch ec2 instances
          tags: ec2-create
          amazon.aws.ec2_instance:
            name: "{{ vpc_hostname }}"
            region: "eu-central-1"
            key_name: "MyKey"
            image_id: ami-XXXXXXXXXXXXXXXXX
            vpc_subnet_id: "{{ vpc_subnetid }}"
            instance_type: "{{ instance_type }}"
            volumes:
              - device_name: /dev/sda1
                ebs:
                  volume_size: 30
                  delete_on_termination: true
            network:
              interfaces:
                - id: "{{ eth0 }}"
                - id: "{{ eth1 }}"
            detailed_monitoring: true
          register: ec2
          delegate_to: localhost
      # By specifying 'never' on the tag of this block,
      # I let it run only when explicitly called
      tags: ['never', 'ec2-create']
(If you're wondering about the tag-stuff, this comes from the tutorial I followed initially, credits: https://www.middlewareinventory.com/blog/ansible-aws-ec2/#How_Ansible_works_with_AWS_EC2_Setup_Boto_for_Ansible)
The execution of the ansible playbook breaks with this error:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Invalid type for parameter NetworkInterfaces[1].NetworkInterfaceId, value: {'changed': True, 'interface': {'id': 'eni-XXXXXXXXXXXXXXXXX', 'subnet_id': 'subnet-XXXXXXXXXXXXXXXXX', 'vpc_id': 'vpc-XXXXXXXXXXXXXXXXX', 'description': 'somedescription', 'owner_id': 'XXXXXXXXXXXXXXXXX', 'status': 'available', 'mac_address': 'xx:xx:xx:xx:xx:xx', 'private_ip_address': 'xx.xx.xxx.xx', 'source_dest_check': True, 'groups': {'sg-XXXXXXXXXXXXXXXXX': 'SGNAME'}, 'private_ip_addresses': [{'private_ip_address': 'xx.xx.xxx.xx', 'primary_address': True}], 'name': 'eth1_wan_<fqdn>', 'tags': {'Name': 'eth1_wan_<fqdn>'}}, 'failed': False}, type: <class 'dict'>, valid types: <class 'str'>
So, my colleague and I managed to solve this: use "{{ eth0.interface.id }}" instead. However, all instances continued to terminate themselves on creation, with Client.InternalError in the AWS console. This was related to KMS/EBS encryption, which I had turned on by default that day.
It turned out that I had tried to use an asymmetric customer managed key for default EBS encryption. As soon as I replaced it with a symmetric one, it worked and the instances started.
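For reference, a sketch of the relevant part of the launch task after the fix (all other parameters stay as in the playbook above):

- name: Launch ec2 instances
  amazon.aws.ec2_instance:
    # ...other parameters unchanged...
    network:
      interfaces:
        # Reference the nested interface id from the registered ENI results
        - id: "{{ eth0.interface.id }}"
        - id: "{{ eth1.interface.id }}"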

How to extract individual IPs of EC2 instances launched using Ansible when count is more than 1

I am launching 2 AWS EC2 instances using Ansible with count: 2; please check the playbook below:
- name: Create an EC2 instance
  ec2:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ key }}"
    key_name: "{{ keypair }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: yes
    count: 2
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    assign_public_ip: "{{ assign_public_ip }}"
  register: ec2

- name: Add the newly created 1 EC2 instance(s) to webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"

- name: add newly created remaining ec2 instance to db group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"
Here I want to add one IP to the webserver host group and the remaining one to the db host group, but it's not working with the above playbook. Please help me achieve this.
I don't want to use add_host here.
Since you are using AWS, have you considered using the aws_ec2 plugin for dynamic inventory?
As long as you are tagging your instances correctly and you set up the YAML file, it would do what you want; see the sketch below.
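A minimal sketch of such an inventory file (the file name must end in aws_ec2.yml; the role tag is an assumption, use whatever tag distinguishes your web and db servers):

# inventory.aws_ec2.yml (hypothetical file name)
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # An instance tagged role=webserver lands in group 'tag_role_webserver'
  - key: tags.role
    prefix: tag_role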
Otherwise, your register: ec2 has two elements in it. The way you are looping through ec2.instances would (if it worked) add both instances to each group. You would need either a when condition matching something like a tag/subnet/CIDR to decide which server goes into which group, or to index into the list directly, as sketched below.
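A minimal sketch of the indexing approach, reusing the registered ec2 variable and hoststring from the question and assuming the first instance should be the web server:

- name: Add the first EC2 instance to the webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ ec2.instances[0].private_ip }} {{ hoststring }}"
    state: present

- name: Add the second EC2 instance to the db-server group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ ec2.instances[1].private_ip }} {{ hoststring }}"
    state: present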
One way to see what the return contains is to print out the ec2 variable:
- debug: var=ec2

How to launch multiple instances with different subnet IDs using Ansible

I'm trying to use Ansible to create two instances, each of which must have a different subnet ID. I'm using exact_count with the tag Name to keep track of instances. The main issue is that I can't work out how to provide two different subnet IDs in the same playbook.
This task can be used to create multiple EC2 instances with different subnets:
- name: 7. Create EC2 server
  ec2:
    image: "{{ image }}"
    wait: true
    instance_type: t2.micro
    group_id: "{{ security_group.group_id }}"
    vpc_subnet_id: "{{ item }}"
    key_name: "{{ key_name }}"
    count: 1
    region: us-east-1
  with_items:
    - "{{ subnet1.subnet.id }}"
    - "{{ subnet2.subnet.id }}"
  register: ec2
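Note that registering a variable on a looped task wraps each iteration's output in a results list, so the created instances are reached like this (a sketch):

- name: Show the private IP of the instance launched in the first subnet
  debug:
    var: ec2.results[0].instances[0].private_ip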

Find the list of all the AWS ALB target groups in which instance is registered

For ELB, if we want to remove an instance from all of its ELBs, we just need to pass the instance ID to the elb_instance module; and if we want to add the instance back in the same playbook, the module gives us a magic variable, ec2_elbs, which we can iterate over to register the instance with all the ELBs it was previously registered to.
I didn't find any module I could use to get the list of target groups an instance is registered in. Can somebody point one out if they know of it?
There is the elb_target_group module, which is in preview mode, or you can make use of a Python script and boto to find the list of instances in a particular target group.
I have found a way to dynamically deregister instance(s) from all the target groups they are registered in:
- name: Collect facts about EC2 Instance
  action: ec2_metadata_facts

- name: Get the list of all target groups to which Instance(s) registered
  local_action:
    module: elb_target_facts
    instance_id: "{{ ansible_ec2_instance_id }}"
    region: "{{ ansible_ec2_placement_region }}"
  register: alb_target_facts
  become: no

- name: Deregister the Instance from all target groups
  local_action:
    module: elb_target
    target_group_arn: "{{ item.target_group_arn }}"
    target_id: "{{ ansible_ec2_instance_id }}"
    region: "{{ ansible_ec2_placement_region }}"
    target_status_timeout: 300
    target_status: unused
    state: absent
  loop: "{{ alb_target_facts.instance_target_groups }}"
  become: no
Because I registered the target group information in a variable, I can use it again to add the instance back:
- name: Register the Instance back to the target group(s)
  local_action:
    module: elb_target
    target_group_arn: "{{ item.target_group_arn }}"
    target_id: "{{ ansible_ec2_instance_id }}"
    region: "{{ ansible_ec2_placement_region }}"
    target_status: healthy
    target_status_timeout: 300
    state: present
  loop: "{{ alb_target_facts.instance_target_groups }}"
  become: no

Is it possible to create an instance-store AMI using the ec2_ami module?

I am using Ansible 2.1.3.0 at the moment and I am trying to figure out how to make an instance-store AMI using the ec2_ami module.
It looks like it is not possible to do this if the instance has an instance-store root: I am getting "Instance does not have a volume attached at root (null)" no matter what I do.
So I am wondering how I could make an instance-store AMI (using the ec2_ami module) from an EBS-backed instance. The documentation does not let me map the first volume to be ami while the second needs to be ephemeral0. I was going through the ec2_ami module source code and it seems to have no support for this, but I may have overlooked something...
When I do curl http://169.254.169.254/latest/meta-data/block-device-mapping/ on the source EC2 instance, I am getting the following:
ami
ephemeral0
I'm on 2.2 and using ec2 rather than ec2_ami, but this would accomplish what you're looking for...
- name: Launch instance
  ec2:
    aws_access_key: "{{ aws.access_key }}"
    aws_secret_key: "{{ aws.secret_key }}"
    instance_tags: "{{ instance.tags }}"
    count: 1
    state: started
    key_name: "{{ instance.key_name }}"
    group: "{{ instance.security_group }}"
    instance_type: "{{ instance.type }}"
    image: "{{ instance.image }}"
    wait: true
    region: "{{ aws.region }}"
    vpc_subnet_id: "{{ subnets_id }}"
    assign_public_ip: "{{ instance.public_ip }}"
    instance_profile_name: "{{ aws.iam_role }}"
    instance_initiated_shutdown_behavior: terminate
    volumes:
      - device_name: /dev/sdb
        volume_type: ephemeral
        ephemeral: ephemeral0
  register: ec2
Pay special attention to the instance_initiated_shutdown_behavior param: the default is stop, which is incompatible with instance-store based systems, so you'll need to set it to terminate for this to work.
