I'm trying to make my playbooks aware of their environment, that being production or stage.
My playbook first pulls the ec2_tag for its environment successfully and sets it in the variable 'env'. On the ec2 provision task, I would like to use the 'env' variable in the VPC and subnet lookups so it grabs the correct one based on its environment. Below I tried adding the 'env' variable into the lookups, but I'm getting the full string back instead of it pulling the value from my variable file.
Do I need to add some logic checks within the {{}}?
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ env + '_subnet_public1' }}"
    group_id: "{{ env + '_app_group' }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
If you want to do variable interpolation, then you'll need to use the hostvars[inventory_hostname]['variable'] syntax.
The FAQs cover this briefly, but for your use case you should be able to do something like:
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ hostvars[inventory_hostname][env + '_subnet_public1'] }}"
    group_id: "{{ hostvars[inventory_hostname][env + '_app_group'] }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
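For that indirection to resolve, the environment-specific values have to exist as variables somewhere in scope. A hypothetical vars file, with placeholder IDs and names assumed from the naming scheme in the question:
# group_vars/all.yml (hypothetical -- placeholder values, not from the question)
production_subnet_public1: subnet-0aaaaaaaaaaaaaaa1
production_app_group: sg-0aaaaaaaaaaaaaaa1
stage_subnet_public1: subnet-0bbbbbbbbbbbbbbb2
stage_app_group: sg-0bbbbbbbbbbbbbbb2
With env set to production or stage, hostvars[inventory_hostname][env + '_subnet_public1'] then resolves to the matching subnet ID instead of the literal string.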
The following Ansible code creates 1 EC2 instance with 1 tag:
- ec2:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    ec2_region: "{{ aws_region }}"
    assign_public_ip: yes
    instance_type: "{{ instance_type }}"
    image: "{{ aws_ami }}"
    keypair: ansible_key
    vpc_subnet_id: "{{ aws_subnet_id }}"
    group_id: "{{ aws_security_group }}"
    exact_count: 1
    count_tag:
      name: first_group
    instance_tags:
      name: first_group
    wait: yes
  register: ec2
- debug:
    var: ec2
What is the best way to rework this code to create 2 EC2 instances, each one with a different tag?
Mind you, this is OLD STUFF, so it may not work any more. But I defined the inventory first, in group_vars/all.yml:
env_id: EE_TEST3_SLS
env_type: EE
zones:
  - zone: presentation
    servers:
      - type: rp
        count: 1
        instance_type: m4.xlarge
      - type: L7
        count: 1
        instance_type: m4.xlarge
  - zone: application
    servers:
      - type: app
        count: 2
        instance_type: m4.xlarge
      - type: sre
        count: 0
        instance_type: m4.xlarge
      - type: httpd
        count: 1
        instance_type: m4.xlarge
      - type: terracotta
        count: 4
        mirror_groups: 2
        instance_type: m4.xlarge
      - type: L7
        count: 1
        instance_type: m4.xlarge
  - zone: data
    servers:
      - type: batch
        count: 2
        instance_type: m4.xlarge
      - type: terracotta_tmc
        count: 1
        instance_type: m4.xlarge
      - type: terracotta
        count: 2
        mirror_groups: 1
        instance_type: m4.xlarge
      - type: oracle
        count: 1
        instance_type: m4.4xlarge
      - type: marklogic
        count: 1
        instance_type: m4.4xlarge
      - type: alfresco
        count: 1
        instance_type: m4.4xlarge
Then, the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Provision a set of instances
      ec2:
        instance_type: "{{ item.1.instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnets[item.0.zone] }}"
        tenancy: "{{ tenancy }}"
        group_id: "{{ group_id }}"
        key_name: "{{ key_name }}"
        wait: true
        instance_tags:
          EnvID: "{{ env_id }}"
          Type: "{{ item.1.type }}"
          Zone: "{{ item.0.zone }}"
        count_tag:
          EnvID: "{{ env_id }}"
          Type: "{{ item.1.type }}"
          Zone: "{{ item.0.zone }}"
        exact_count: "{{ item.1.count }}"
      with_subelements:
        - "{{ zones }}"
        - servers
      register: ec2_results
The idea is that you set all the tags in the inventory, and just reference them in the task.
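Note that the task above also assumes a handful of variables not shown in the inventory snippet (image, region, tenancy, group_id, key_name, and a subnets map keyed by zone). A hypothetical sketch of what those might look like in group_vars/all.yml, with placeholder IDs:
# Hypothetical supporting variables (placeholder values, not from the original answer)
image: ami-0123456789abcdef0
region: us-east-1
tenancy: default
group_id: sg-0123456789abcdef0
key_name: my-keypair
subnets:
  presentation: subnet-0aaaaaaaaaaaaaaa1
  application: subnet-0aaaaaaaaaaaaaaa2
  data: subnet-0aaaaaaaaaaaaaaa3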
Provided you simply want to create the same instances, only replacing the value first_group with second_group in the module parameters, the following (totally untested) example should put you on track.
If you need to go further, as already advised in the comments, have a look at the Ansible loop documentation.
- ec2:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    ec2_region: "{{ aws_region }}"
    assign_public_ip: yes
    instance_type: "{{ instance_type }}"
    image: "{{ aws_ami }}"
    keypair: ansible_key
    vpc_subnet_id: "{{ aws_subnet_id }}"
    group_id: "{{ aws_security_group }}"
    exact_count: 1
    count_tag:
      name: "{{ item }}"
    instance_tags:
      name: "{{ item }}"
    wait: yes
  register: ec2
  loop:
    - first_group
    - second_group
- debug:
    var: ec2.results
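If you later need the created instances per group, the registered variable holds one result per loop iteration. A hypothetical way to pull the instance IDs out, assuming the old ec2 module's tagged_instances return key:
- debug:
    msg: "{{ item.item }}: {{ item.tagged_instances | map(attribute='id') | list }}"
  loop: "{{ ec2.results }}"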
Using Ansible, I am trying to create EC2 instances and attach an extra network interface to each instance so that they will have two private IP addresses. However, for some reason, it seems that the ec2_eni module can create network interfaces but will not attach them to the specified instances. What am I doing wrong? Below is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create new servers
      ec2:
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group_id: "{{ sec_group }}"
        assign_public_ip: yes
        wait: true
        key_name: '{{ key }}'
        instance_type: t2.micro
        image: '{{ ami }}'
        exact_count: '{{ count }}'
        count_tag:
          Name: "{{ server_name }}"
        instance_tags:
          Name: "{{ server_name }}"
      register: ec2
    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    - name: allocate new elastic IPs and associate it with instances
      ec2_eip:
        region: "{{ region }}"
        device_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
      register: eips
    - name: Show eip instance json data
      debug:
        msg: "{{ eips['results'] }}"
    - ec2_eni:
        subnet_id: "{{ subnet }}"
        state: present
        secondary_private_ip_address_count: 1
        security_groups: "{{ sec_group }}"
        region: "{{ region }}"
        device_index: 1
        description: "test-eni"
        instance_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
The strange thing is that the ec2_eni task succeeds, saying that it has attached the network interface to each instance when in reality it just creates the network interface and then does nothing with it.
As best I can tell, attached defaults to None, and since the module docs say:
Specifies if network interface should be attached or detached from instance. If ommited, attachment status won't change
the code does exactly what they claim and skips the attachment step.
This appears to be a documentation bug: the docs claim the default is 'yes', which is not accurate.
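Untested, but based on that reading, explicitly setting attached on the ec2_eni task from the question should make the module perform the attachment:
- ec2_eni:
    subnet_id: "{{ subnet }}"
    state: present
    attached: true        # request attachment explicitly; omitting this leaves the ENI detached
    secondary_private_ip_address_count: 1
    security_groups: "{{ sec_group }}"
    region: "{{ region }}"
    device_index: 1
    description: "test-eni"
    instance_id: "{{ item['id'] }}"
  with_items: "{{ ec2['tagged_instances'] }}"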
I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and later should attach an existing volume to the new instance:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination
- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good; everything works.
I would like to add a clean-up step, so that if there is an error of any kind in the later modules, I won't be left with any garbage instances.
I understand that the idea here is to use a block; however, when I tried working with block here, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost,
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars
    - name: Create instance
      block:
        - debug:
            msg: "Launching EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance
  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug: msg="{{ new_ec2.instances[0].id }}"
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns an ok status, so:
1. it does not notify a handler (that requires a changed status),
2. the rescue section is never triggered (that requires a failed status).
Thus it is expected that "nothing really happens".
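To see the mechanics in isolation, here is a minimal, hypothetical block/rescue (not part of the original playbook): the rescue section only runs when a task inside the block fails.
- name: Demonstrate block/rescue
  block:
    - name: This task fails
      command: /bin/false
  rescue:
    - debug:
        msg: "Rolling back"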
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug:
        msg: "{{ new_ec2.instances[0].id }}"
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
I have two tasks that differ by one (optional) value. Depending on where the playbook is running, the optional value is either present or not. At the moment I have two copies of the task, executed based on a when clause, but this is error-prone, as the task in question has a lot of other mandatory config and the one optional entry is lost in the mix.
- name: Launch configuration for prod
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lcprod
  tags:
    - asg
  when:
    env == "prod"
- name: Launch configuration for non-prod
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    spot_price: "{{ admin_bid_price }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lcdev
  tags:
    - asg
  when:
    env != "prod"
- name: Set lc for prod
  set_fact: lc="{{ lcprod }}"
  when:
    env == "prod"
- name: Set lc for non-prod
  set_fact: lc="{{ lcdev }}"
  when:
    env != "prod"
Ideally, this example could be collapsed down into a single task (i.e. without the subsequent set_fact task) and the spot_price entry is either present or not depending on the env.
I've tried setting spot_price to null or "" without success.
To me it seems like you want:
spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
See: Ansible Filters - Omitting Parameters and Other Useful Filters
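Applied to the tasks above, the two launch-configuration tasks and the set_fact tasks could then collapse into a single task along these lines (untested sketch, reusing the variable names from the question and registering straight into lc):
- name: Launch configuration
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro | replace(':', '-') }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    # Omitted entirely when env == 'prod', so no spot price is set there
    spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lc
  tags:
    - asg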
It appears to be easy to set delete-on-termination for new volumes attached to an EC2 instance, but how do I set that on the boot volume?
Are you talking about setting this option on the root/boot volume during instance creation? If so, here is example code:
- name: Create EC2 Instance(s)
  ec2:
    region: "{{ vpc_region }}"
    group: "{{ ec2_security_group_name }}"
    keypair: "{{ ec2_key_name }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ami_find_result.results[0].ami_id }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: True
    wait_timeout: 600
    user_data: |
      #!/bin/sh
      sudo apt-get install nginx -y
    instance_tags:
      Name: "{{ vpc_name }}-{{ SERVER_ROLE }}"
      Environment: "{{ ENVIRONMENT }}"
      Server_Role: "{{ SERVER_ROLE }}"
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: "{{ ec2_volume_size }}"
        delete_on_termination: yes
For details, please check this link
Hope that might help you