I'm trying to use Ansible to create two instances, each of which must be in a different subnet. I'm using exact_count with the Name tag to keep track of instances. The main issue is that I can't figure out how to provide two different subnet IDs in the same playbook.
This task can be used to create multiple EC2 instances, each in a different subnet:
- name: 7. Create EC2 server
  ec2:
    image: "{{ image }}"
    wait: true
    instance_type: t2.micro
    group_id: "{{ security_group.group_id }}"
    vpc_subnet_id: "{{ item }}"
    key_name: "{{ key_name }}"
    count: 1
    region: us-east-1
  with_items:
    - "{{ subnet1.subnet.id }}"
    - "{{ subnet2.subnet.id }}"
  register: ec2
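Since the question mentions exact_count with a Name tag, here is a minimal sketch of that variant; the per-subnet Name values are assumptions, not taken from the original playbook:

- name: Create exactly one EC2 server per subnet
  ec2:
    image: "{{ image }}"
    wait: true
    instance_type: t2.micro
    group_id: "{{ security_group.group_id }}"
    vpc_subnet_id: "{{ item.subnet_id }}"
    key_name: "{{ key_name }}"
    region: us-east-1
    exact_count: 1
    # exact_count is reconciled per Name tag, so each subnet gets its own tag value
    count_tag:
      Name: "{{ item.name }}"
    instance_tags:
      Name: "{{ item.name }}"
  with_items:
    - { subnet_id: "{{ subnet1.subnet.id }}", name: "server-subnet1" }
    - { subnet_id: "{{ subnet2.subnet.id }}", name: "server-subnet2" }
  register: ec2

Because the task loops, the registered ec2 variable will contain a results list with one entry per subnet.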
I am launching two AWS EC2 instances with Ansible using count: 2. Please see the playbook below:
- name: Create an EC2 instance
  ec2:
    aws_access_key: "{{ access_key }}"
    aws_secret_key: "{{ key }}"
    key_name: "{{ keypair }}"
    region: "{{ region }}"
    group: "{{ security_group }}"
    instance_type: "{{ instance_type }}"
    image: "{{ image }}"
    wait: yes
    count: 2
    vpc_subnet_id: "{{ vpc_subnet_id }}"
    assign_public_ip: "{{ assign_public_ip }}"
  register: ec2
- name: Add the newly created EC2 instance(s) to the webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"

- name: Add the remaining EC2 instance(s) to the db-server group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"
Here I want to add one IP to the webserver host group and the remaining one to the db host group, but it's not working with the above playbook. Please help me achieve this.
I don't want to use add_host here.
Since you are using AWS, have you considered using the aws_ec2 plugin for dynamic inventory?
As long as you tag your instances consistently and set up the plugin's YAML file, it will do what you want.
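A minimal plugin configuration might look like this (a sketch; it assumes your instances carry a "role" tag, and the file name must end in aws_ec2.yml):

# inventory/aws_ec2.yml
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  # builds groups such as role_webserver / role_db from each instance's "role" tag
  - key: tags.role
    prefix: role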
Otherwise, note that your register: ec2 holds two instances. The way you are looping through ec2.instances (if it worked) would add both of them to each group. You would need a when condition that matches on something like the tag, subnet, or CIDR to decide which server goes into which group; a sketch follows the debug example below.
One way to see what the module returns is to print out the ec2 variable:
- debug: var=ec2
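For example, here is a minimal sketch that routes the instances by their position in the registered list; in practice you would match on a tag, subnet, or CIDR instead:

- name: Add the first instance to the webserver group
  lineinfile:
    dest: inventory
    insertafter: '^\[webserver\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"
  when: item.private_ip == ec2.instances[0].private_ip

- name: Add the remaining instance to the db-server group
  lineinfile:
    dest: inventory
    insertafter: '^\[db-server\]$'
    line: "{{ item.private_ip }} {{ hoststring }}"
    state: present
  with_items: "{{ ec2.instances }}"
  when: item.private_ip != ec2.instances[0].private_ip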
I know this seems related to other questions asked in the past, but I feel it's not.
You will be the judge of that.
I've been using Ansible for 1.5 years now, and I'm aware it's not the best tool to provision infrastructure, yet it suffices for my needs.
I'm using AWS as my cloud provider.
Is there any way to get Ansible to provision multiple spot instances of the same type (similar to the count attribute in the ec2 module) with instance_interruption_behavior set to stop (behavior) instead of the default terminate?
My Goal:
Set up multiple EC2 instances with separate EC2 spot requests (one spot request per instance), but instead of using the default instance_interruption_behavior, which is terminate, I wish to set it to stop (behavior).
What I've done so far:
I'm aware that Ansible has the following module to handle EC2 infrastructure, mainly ec2:
https://docs.ansible.com/ansible/latest/modules/ec2_module.html
Not to be confused with the ec2_instance module:
https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html
I could use the ec2 module to set up spot instances, as I've done so far:
ec2:
  key_name: mykey
  region: us-west-1
  instance_type: "{{ instance_type }}"
  image: "{{ instance_ami }}"
  wait: yes
  wait_timeout: 100
  spot_price: "{{ spot_bid }}"
  spot_wait_timeout: 600
  spot_type: persistent
  group_id: "{{ created_sg.group_id }}"
  instance_tags:
    Name: "Name"
  count: "{{ my_count }}"
  vpc_subnet_id: "{{ subnet }}"
  assign_public_ip: yes
  monitoring: no
  volumes:
    - device_name: /dev/sda1
      volume_type: gp2
      volume_size: "{{ volume_size }}"
Pre Problem:
The above module call is nice and clean and allows me to set up spot instances according to my requirements, using the ec2 module and its count attribute.
Yet it doesn't allow me to set instance_interruption_behavior to stop (unless I'm wrong); it just defaults to terminate.
Other modules:
To be released in Ansible version 2.8.0 (currently only available in devel), there is a new module called ec2_launch_template:
https://docs.ansible.com/ansible/devel/modules/ec2_launch_template_module.html
This module gives you more attributes which, in combination with the ec2_instance module, allow you to set the desired instance_interruption_behavior to stop (behavior).
This is what I did:
- hosts: 127.0.0.1
  connection: local
  gather_facts: True
  vars_files:
    - vars/main.yml
  tasks:
    - name: "create security group"
      include_tasks: tasks/create_sg.yml

    - name: "Create launch template to support interruption behavior 'STOP' for spot instances"
      ec2_launch_template:
        name: spot_launch_template
        region: us-west-1
        key_name: mykey
        instance_type: "{{ instance_type }}"
        image_id: "{{ instance_ami }}"
        instance_market_options:
          market_type: spot
          spot_options:
            instance_interruption_behavior: stop
            max_price: "{{ spot_bid }}"
            spot_instance_type: persistent
        monitoring:
          enabled: no
        block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              volume_type: gp2
              volume_size: "{{ manager_instance_volume_size }}"
      register: my_template

    - name: "Set up a single spot instance"
      ec2_instance:
        instance_type: "{{ instance_type }}"
        tags:
          Name: "My_Spot"
        launch_template:
          id: "{{ my_template.latest_template.launch_template_id }}"
        security_groups:
          - "{{ created_sg.group_id }}"
The main Problem:
As shown in the snippet above, the ec2_launch_template Ansible module does not support a count attribute, and the ec2_instance module doesn't offer one either.
Is there any way to get Ansible to provision multiple instances of the same type (similar to the count attribute in the ec2 module)?
In general, if you are managing lots of copies of EC2 instances at once, you probably want to use either Auto Scaling groups or EC2 Fleets.
There is an ec2_asg module that allows you to manage Auto Scaling groups via Ansible. This would allow you to deploy multiple EC2 instances based on that launch template, as in the sketch below.
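Here is a minimal sketch, reusing the my_count and subnet variables and the launch template registered above. It assumes your Ansible version's ec2_asg supports the launch_template option (added around the same 2.8 timeframe as ec2_launch_template), and the group name is a placeholder:

- name: Keep my_count spot instances running via an Auto Scaling group
  ec2_asg:
    name: spot-workers            # placeholder ASG name
    region: us-west-1
    launch_template:
      launch_template_id: "{{ my_template.latest_template.launch_template_id }}"
    # pinning min, max, and desired to the same value mimics the ec2 module's count
    min_size: "{{ my_count }}"
    max_size: "{{ my_count }}"
    desired_capacity: "{{ my_count }}"
    vpc_zone_identifier:
      - "{{ subnet }}"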
Another technique to solve this would be to use the cloudformation or terraform modules, if AWS CloudFormation templates or Terraform plans support what you want to do. This would also let you use Jinja2 templating syntax to do some clever template generation before supplying the infrastructure templates, if necessary.
For a classic ELB, if we want to remove an instance from all of its ELBs, we just pass the instance ID to the elb_instance module. If we then want to add the instance back in the same playbook, the module gives us a magic variable, ec2_elbs, which we can iterate over to register the instance with all the ELBs it was previously registered with (a sketch of that flow follows).
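For reference, that classic-ELB flow looks roughly like this (a minimal sketch of the elb_instance pre_tasks/post_tasks pattern; the play target and the maintenance role name are placeholders):

- hosts: webservers
  serial: 1
  pre_tasks:
    - name: Collect instance metadata so ansible_ec2_instance_id is available
      action: ec2_metadata_facts
    - name: Deregister the instance from all of its classic ELBs (records them in the ec2_elbs fact)
      local_action:
        module: elb_instance
        instance_id: "{{ ansible_ec2_instance_id }}"
        state: absent
  roles:
    - my_maintenance_role       # placeholder for whatever work happens while the instance is out
  post_tasks:
    - name: Register the instance back with the ELBs it came out of
      local_action:
        module: elb_instance
        instance_id: "{{ ansible_ec2_instance_id }}"
        ec2_elbs: "{{ item }}"
        state: present
      with_items: "{{ ec2_elbs }}"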
I didn't find any module I can use to list the target groups an instance is registered with. Can somebody point me to one if they know of it?
There is the elb_target_group module, which is in preview mode, or you can make use of a Python script and boto to find the list of instances in a particular target group.
I have found a way to dynamically remove the instance(s) from all the target groups they have been registered with:
- name: Collect facts about the EC2 instance
  action: ec2_metadata_facts

- name: Get the list of all target groups the instance is registered with
  local_action:
    module: elb_target_facts
    instance_id: "{{ ansible_ec2_instance_id }}"
    region: "{{ ansible_ec2_placement_region }}"
  register: alb_target_facts
  become: no

- name: Deregister the instance from all target groups
  local_action:
    module: elb_target
    target_group_arn: "{{ item.target_group_arn }}"
    target_id: "{{ ansible_ec2_instance_id }}"
    region: "{{ ansible_ec2_placement_region }}"
    target_status_timeout: 300
    target_status: unused
    state: absent
  loop: "{{ alb_target_facts.instance_target_groups }}"
  become: no
Because I have registered the target group information in a variable, I can use it again to add the instance back:
- name: Register the instance back to the target group(s)
  local_action:
    module: elb_target
    target_group_arn: "{{ item.target_group_arn }}"
    target_id: "{{ ansible_ec2_instance_id }}"
    region: "{{ ansible_ec2_placement_region }}"
    target_status: healthy
    target_status_timeout: 300
    state: present
  loop: "{{ alb_target_facts.instance_target_groups }}"
  become: no
I am using Ansible 2.1.3.0 at the moment and I am trying to figure out how to make an instance-store AMI using the ec2_ami module.
It looks like it is not possible to do this if the instance has an instance-store root: I am getting "Instance does not have a volume attached at root (null)" no matter what I do.
So I am wondering how I could make an instance-store AMI (using the ec2_ami module) from an EBS-backed instance. The documentation does not let me map the first volume as ami while the second needs to be ephemeral0. I was going through the ec2_ami module source code and it seems to have no support for this, but I may have overlooked something...
When I do curl http://169.254.169.254/latest/meta-data/block-device-mapping/ on the source EC2 instance, I am getting the following:
ami
ephemeral0
I'm on 2.2 and using ec2 rather than ec2_ami, but this should accomplish what you're looking for...
- name: Launch instance
  ec2:
    aws_access_key: "{{ aws.access_key }}"
    aws_secret_key: "{{ aws.secret_key }}"
    instance_tags: "{{ instance.tags }}"
    count: 1
    state: started
    key_name: "{{ instance.key_name }}"
    group: "{{ instance.security_group }}"
    instance_type: "{{ instance.type }}"
    image: "{{ instance.image }}"
    wait: true
    region: "{{ aws.region }}"
    vpc_subnet_id: "{{ subnets_id }}"
    assign_public_ip: "{{ instance.public_ip }}"
    instance_profile_name: "{{ aws.iam_role }}"
    instance_initiated_shutdown_behavior: terminate
    volumes:
      - device_name: /dev/sdb
        volume_type: ephemeral
        ephemeral: ephemeral0
  register: ec2
Pay special attention to the instance_initiated_shutdown_behavior parameter: the default is stop, which is incompatible with instance-store based systems, so you'll need to set it to terminate for this to work.
I would like to create and provision Amazon EC2 machines with the help of Ansible.
Now, I get the following error:
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Instance creation failed => InvalidKeyPair.NotFound: The key pair '~/.keys/EC2-Kibi-Enterprise-Deployment.pem' does not exist"}
But the .pem key exists:
$ ls -lh ~/.keys/EC2-Kibi-Enterprise-Deployment.pem
-r-------- 1 sergey sergey 1.7K Apr 6 09:56 /home/sergey/.keys/EC2-Kibi-Enterprise-Deployment.pem
And it was created in EU (Ireland) region.
Here is my playbook:
---
- name: Setup servers on Amazon EC2 machines
  hosts: localhost
  gather_facts: no
  tasks:
    - include_vars: group_vars/all/ec2_vars.yml

    ### Create Amazon EC2 instances
    - name: Amazon EC2 | Create instances
      ec2:
        count: "{{ count }}"
        key_name: "{{ key }}"
        region: "{{ region }}"
        zone: "{{ zone }}"
        group: "{{ group }}"
        instance_type: "{{ machine }}"
        image: "{{ image }}"
        wait: true
        wait_timeout: 500
        #vpc_subnet_id: "{{ subnet }}"
        #assign_public_ip: yes
      register: ec2

    - name: Amazon EC2 | Wait for SSH to come up
      wait_for:
        host: "{{ item.public_ip }}"
        port: 22
        delay: 10
        timeout: 60
        state: started
      with_items: "{{ ec2.instances }}"

    - name: Amazon EC2 | Add hosts to the kibi_servers in-memory inventory group
      add_host:
        hostname: "{{ item.public_ip }}"
        groupname: kibi_servers
      with_items: "{{ ec2.instances }}"
    ### END

### Provision roles
- name: Amazon EC2 | Provision new instances
  hosts: kibi_servers
  become: yes
  roles:
    - common
    - java
    - elasticsearch
    - logstash
    - nginx
    - kibi
    - supervisor
### END
And my var file:
count: 2
region: eu-west-1
zone: eu-west-1a
group: default
image: ami-d1ec01a6
machine: t2.medium
subnet: subnet-3a2aa952
key: ~/.keys/EC2-Kibi-Enterprise-Deployment.pem
What is wrong with the .pem file here?
The key parameter for the ec2 module is looking for the key pair name that has already been uploaded to AWS, not a local key file.
If you want to get Ansible to upload a public key you can use the ec2_key module.
So your playbook would look like this:
---
- name: Setup servers on Amazon EC2 machines
  hosts: localhost
  gather_facts: no
  tasks:
    - include_vars: group_vars/all/ec2_vars.yml

    ### Create Amazon EC2 key pair
    - name: Amazon EC2 | Create Key Pair
      ec2_key:
        name: "{{ key_name }}"
        region: "{{ region }}"
        key_material: "{{ item }}"
      with_file: /path/to/public_key.id_rsa.pub

    ### Create Amazon EC2 instances
    - name: Amazon EC2 | Create instances
      ec2:
        count: "{{ count }}"
        key_name: "{{ key_name }}"
        ...
Do not specify an extension for the key, so the key name should be "EC2-Kibi-Enterprise-Deployment" only. Ansible doesn't care whether the key is on your local machine at this stage; it verifies that it exists in your AWS account. Go to the 'EC2 > Key Pairs' section of your AWS account and you'll see that keys are listed without file extensions.
The solution has been found. EC2 doesn't like it when you put a full path to the .pem key file.
So I moved EC2-Kibi-Enterprise-Deployment.pem into ~/.ssh and added it to the authentication agent with ssh-add:
ssh-add ~/.ssh/EC2-Kibi-Enterprise-Deployment.pem
And I corrected the key line in my var file to key: EC2-Kibi-Enterprise-Deployment.pem
The same applies if you use the EC2 CLI tools: don't specify a full path to the key file.
ec2-run-instances ami-d1ec01a6 -t t2.medium --region eu-west-1 --key EC2-Kibi-Enterprise-Deployment.pem
When providing the key in a variable, don't include the file extension (.pem); just give the file name.
For example, if akshay.pem is my key, then in the vars file just provide akshay as the key, as in the snippet below.
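In other words, the vars entry is just the bare key pair name as it appears in AWS (sketch):

# vars file: AWS key pair name, with no path and no .pem extension
key: akshay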