Provision of multiple EC2 instances with Ansible - amazon-ec2

I am trying to provision multiple AWS EC2 machines using Ansible. I wanted to make the "availabilityZone" and "tag_name" dynamic. Please find the task below. If I change the count to 2, I should be able to run the same task without any modifications to create two instances.
- name: Provision an instance
  ec2:
    key_name: "{{ keyPairName }}"
    instance_type: m3.medium
    image: "{{ amiId }}"
    region: us-east-1
    zone: us-east-1{{ item.0 }}
    group_id: "{{ securityGroup }}"
    vpc_subnet_id: "{{ subnet }}"
    wait: true
    count: 1
    instance_tags:
      Name: esl-master{{ item.1 }}-{{ deployenv }}
  register: ec2_master
  with_together:
    - [ 'a', 'b', 'c' ]
    - [ 1, 2, 3 ]
  when: isMaster
With the above configuration, I am getting the below error.
failed: [localhost] (item=[u'b', 2]) => {"failed": true, "item": ["b", 2], "msg": "Instance creation failed => InvalidParameterValue: Value (us-east-1b) for parameter availabilityZone is invalid. Subnet 'subnet-f22551d9' is in the availability zone us-east-1a"}
failed: [localhost] (item=[u'c', 3]) => {"failed": true, "item": ["c", 3], "msg": "Instance creation failed => InvalidParameterValue: Value (us-east-1c) for parameter availabilityZone is invalid. Subnet 'subnet-f22551d9' is in the availability zone us-east-1a"}
Is there a better way of doing this, since I want to take advantage of the 'zone' parameter?

For every iteration you set the same vpc_subnet_id: "{{ subnet }}", but a subnet lives in exactly one availability zone (here us-east-1a), so the second and third iterations fail when they request zones b and c. You need a different subnet ID per zone, e.g.
vpc_subnet_id: "{{ subnet }}{{ item.0 }}"
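One way to wire that up is to keep the with_together pattern from the question and add a third list of per-zone subnets; a sketch (the subnet IDs below are placeholders):

```yaml
- name: Provision one instance per zone
  ec2:
    # ...same parameters as in the question...
    zone: us-east-1{{ item.0 }}
    vpc_subnet_id: "{{ item.2 }}"
  with_together:
    - [ 'a', 'b', 'c' ]
    - [ 1, 2, 3 ]
    - [ 'subnet-aaaaaaaa', 'subnet-bbbbbbbb', 'subnet-cccccccc' ]
```

Each iteration then pairs a zone letter with the subnet that actually lives in that zone.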


Search arrays inside ansible msg output

Ansible newbie here.
I am struggling to understand how to check if a value exists in an array returned from a previous task.
I'm trying to check whether a volume exists in any of the volume arrays returned from a search:
- name: Get volume pools data
  community.libvirt.virt_pool:
    command: facts
  changed_when: false

- debug: msg="{{ ansible_libvirt_pools }}.stdout_lines"
this returns output like this:
ok: [server] => {
"msg": "{'default': {'status': 'running', 'size_total': '75125227520', 'size_used': '14622126080', 'size_available': '60503101440', 'autostart': 'yes', 'persistent': 'yes', 'state': 'active', 'path': '/var/lib/libvirt/images', 'type': 'dir', 'uuid': '5f6e9ba1-6a50-4313-9aa6-0981448aff0a', 'volume_count': 5, 'volumes': ['centos-8-stream.raw', 'ubuntu-20.04-server-cloudimg-amd64.img', 'ubuntu-server-20.04.raw', 'vyos-1.4-rolling-202201080317-amd64.iso', 'CentOS-Stream-GenericCloud-8-20201019.1.x86_64.qcow2']}, 'infra0_pool_0': {'status': 'running', 'size_total': '11998523817984', 'size_used': '294205259776', 'size_available': '11704318558208', 'autostart': 'yes', 'persistent': 'yes', 'state': 'active', 'path': '/dev/almalinux_images1', 'type': 'logical', 'uuid': '62882128-003d-472c-ac89-d0118cc992c6', 'volume_count': 6, 'volumes': ['admin0.admin_base_vol', 'test01.test_base_vol', 'test00.test_base_vol', 'root', 'vyos0', 'swap'], 'format': 'lvm2'}}.stdout_lines"
}
I need help understanding how to search a specific volumes array to see if a value already exists, and to ignore:
- name: Create VM volume
  command: |
    virsh vol-create-as {{ pool_name }} {{ vm_volume }} {{ vm_size }}
  when {{ vm_volume}} not in ansible_libvirt_pools.{{ pool_name }}.volumes
Would appreciate any help on how to do this correctly.
I'm getting warnings about Jinja templating in the 'when' statement, and I don't think it's working as expected.
ansible_libvirt_pools:
  default:
    autostart: 'yes'
    path: /var/lib/libvirt/images
    persistent: 'yes'
    size_available: '60503101440'
    size_total: '75125227520'
    size_used: '14622126080'
    state: active
    status: running
    type: dir
    uuid: 5f6e9ba1-6a50-4313-9aa6-0981448aff0a
    volume_count: 5
    volumes:
      - centos-8-stream.raw
      - ubuntu-20.04-server-cloudimg-amd64.img
      - ubuntu-server-20.04.raw
      - vyos-1.4-rolling-202201080317-amd64.iso
      - CentOS-Stream-GenericCloud-8-20201019.1.x86_64.qcow2
  infra0_pool_0:
    autostart: 'yes'
    format: lvm2
    path: /dev/almalinux_images1
    persistent: 'yes'
    size_available: '11704318558208'
    size_total: '11998523817984'
    size_used: '294205259776'
    state: active
    status: running
    type: logical
    uuid: 62882128-003d-472c-ac89-d0118cc992c6
    volume_count: 6
    volumes:
      - admin0.admin_base_vol
      - test01.test_base_vol
      - test00.test_base_vol
      - root
      - vyos0
      - swap
Q: "How to search a specific volumes array to see if a value already exists."
A: For example, given the list of volumes that should exist
vm_volumes: [swap, export]
the task below
- debug:
    msg: "Missing volumes in {{ item.key }}: {{ _missing }}"
  loop: "{{ ansible_libvirt_pools|dict2items }}"
  vars:
    _missing: "{{ vm_volumes|difference(item.value.volumes) }}"
gives (abridged)
msg: 'Missing volumes in default: [''swap'', ''export'']'
msg: 'Missing volumes in infra0_pool_0: [''export'']'
If you want to find existing volumes the task below
- debug:
    msg: "Present volumes in {{ item.key }}: {{ _present }}"
  loop: "{{ ansible_libvirt_pools|dict2items }}"
  vars:
    _present: "{{ vm_volumes|intersect(item.value.volumes) }}"
gives (abridged)
msg: 'Present volumes in default: []'
msg: 'Present volumes in infra0_pool_0: [''swap'']'
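Applied to the original goal (skip creation when the volume already exists), a sketch, assuming pool_name, vm_volume, and vm_size are defined as in the question. Note that 'when' takes a raw Jinja2 expression without mustaches, which is what the templating warning was about:

```yaml
- name: Create VM volume unless it already exists
  command: >
    virsh vol-create-as {{ pool_name }} {{ vm_volume }} {{ vm_size }}
  when: vm_volume not in ansible_libvirt_pools[pool_name].volumes
```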

How can I choose a set of predefined variables from a dictionary using variable keys?

I'm trying to launch EC2 instances in groups (using group_vars) dynamically, in different availability zones and with different visibility. I defined a dictionary with the relevant az, subnet_id, etc. Visibility is defined in group_vars, and I pass the desired zone on the command line with -e arg_zone=a or -e arg_zone=b.
---
- name: create aws instance
  hosts:
    - web_sb
    - web_qa
    - web_prod
  vars:
    regions:
      uswest2:
        private:
          a:
            subnet_id: "subnet-123"
            availability_zone: "us-west-2a"
            assign_public_ip: no
          b:
            subnet_id: "subnet-456"
            availability_zone: "us-west-2b"
            assign_public_ip: no
        public:
          a:
            subnet_id: "subnet-abc"
            availability_zone: "us-west-2a"
            assign_public_ip: yes
          b:
            subnet_id: "subnet-def"
            availability_zone: "us-west-2b"
            assign_public_ip: yes
  tasks:
    - debug:
        var: regions.uswest2.{{ visibility }}.{{ arg_zone }}.subnet_id
    - debug:
        var: regions.uswest2.{{ visibility }}.{{ arg_zone }}.availability_zone
    - debug:
        var: regions.uswest2.{{ visibility }}.{{ arg_zone }}.assign_public_ip
    - name: "ec2 instance created from ami {{ ami_id }}"
      ec2:
        instance_type: "{{ instance_type }}"
        image: "{{ arg_ami_id }}"
        key_name: "{{ key_name }}"
        instance_tags: {}
        wait: yes
        group: "{{ security_groups }}"
        count: 1
        volumes:
          - device_name: "/dev/sda1"
            volume_type: "gp2"
            volume_size: "{{ os_volume_size }}"
            delete_on_termination: yes
        vpc_subnet_id: regions.uswest2.{{ visibility }}.{{ arg_zone }}.subnet_id
        assign_public_ip: no #regions.uswest2.{{ visibility }}.{{ arg_zone }}.assign_public_ip
        region: "us-west-2"
        zone: regions.uswest2.{{ visibility }}.{{ arg_zone }}.availability_zone
The debug statements work as I would expect, but when I try to use those same values in the ec2 task they do not resolve. I'm getting errors like
argument assign_public_ip is of type <type 'str'> and we were unable to convert to bool: The value 'regions.uswest2.private.a.assign_public_ip' is not a valid boolean. Valid booleans include: 0, 'on', 'f', 'false', 1, 'no', 'n', '1', '0', 't', 'y', 'off', 'yes', 'true'"
and
The subnet ID 'regions.uswest2.private.a.subnet_id' does not exist
What am I doing wrong? (Am I going about this completely the wrong way? I realize I could define these in host_vars, but I am creating hosts dynamically, so I'm using group_vars to define different classes of hosts.) Maybe there is another way I haven't considered.
Is there a way to format the variables so that they access these values from the predefined dictionary?
The debug statements work as I would expect, but when I try to use those same values in the ec2 task they are not accessing the values
That's because debug: var: secretly converts its value to msg: "{{ ... }}", wrapping the literal in implicit Jinja2 mustaches and causing the expression to be evaluated. Other module arguments are taken as literal text unless you template them yourself.
what am I doing wrong?
You are confusing literal text with a Jinja2 expression.
is there a way to format the variables to access these values from the predefined dictionary?
assign_public_ip: '{{ regions.uswest2[visibility][arg_zone].assign_public_ip }}'
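Applied to the other parameters in the ec2 task, the same bracket-notation templating would look like this (a sketch of just the affected lines):

```yaml
vpc_subnet_id: "{{ regions.uswest2[visibility][arg_zone].subnet_id }}"
assign_public_ip: "{{ regions.uswest2[visibility][arg_zone].assign_public_ip }}"
zone: "{{ regions.uswest2[visibility][arg_zone].availability_zone }}"
```

Bracket notation is needed because visibility and arg_zone are variables, not literal key names.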

How to fix "Undefind error" while parsing a dictionary in ansible registered variable?

I'm creating some ec2 instances from a specific image, then trying to get a list of disks attached to these instances.
The problem is that when I try to loop over the registered variable from the create-instance task, I get an error.
I have tried the solution from this post, but with no luck:
ansible get aws ebs volume id which already exist
- name: create instance
  ec2:
    region: us-east-1
    key_name: xxxxxxx
    group: xxxxxx
    instance_type: "{{ instance_type }}"
    image: "{{ instance_ami }}"
    wait: yes
    wait_timeout: 500
    instance_tags:
      Name: "{{ item.name }}"
    vpc_subnet_id: "{{ item.subnet }}"
  register: ec2
  loop: "{{ nodes }}"

- name: show attached volumes Ids
  debug:
    msg: "{{ item.block_device_mapping | map(attribute='volume_id') }}"
  loop: "{{ ec2.results[0].instances }}"
While printing only msg: "{{ item.block_device_mapping }}" I get:
"msg": {
"/dev/sda1": {
"delete_on_termination": true,
"status": "attached",
"volume_id": "vol-xxxxxxx"
},
"/dev/xvdb": {
"delete_on_termination": false,
"status": "attached",
"volume_id": "vol-xxxxxx"
},
"/dev/xvdc": {
"delete_on_termination": false,
"status": "attached",
"volume_id": "vol-xxxxxx"
}
}
but when I use
msg: "{{ item.block_device_mapping | map(attribute='volume_id') }}"
I get this error:
"msg": "[AnsibleUndefined, AnsibleUndefined, AnsibleUndefined]"
The task below
- debug:
    msg: "{{ item }}: {{ block_device_mapping[item].volume_id }}"
  loop: "{{ block_device_mapping.keys() }}"
gives the {device: volume_id} tuples (grep msg):
"msg": "/dev/xvdb: vol-xxxxxx"
"msg": "/dev/xvdc: vol-xxxxxx"
"msg": "/dev/sda1: vol-xxxxxxx"
To iterate instances use json_query. The task below
- debug:
    msg: "{{ item.block_device_mapping|json_query('*.volume_id') }}"
  loop: "{{ ec2.results[0].instances }}"
gives:
"msg": [
"vol-xxxxxx",
"vol-xxxxxx",
"vol-xxxxxxx"
]
and the task below with zip
- debug:
    msg: "{{ item.block_device_mapping.keys()|zip(
             item.block_device_mapping|json_query('*.volume_id'))|list }}"
  loop: "{{ ec2.results[0].instances }}"
gives the list of lists:
"msg": [
[
"/dev/xvdb",
"vol-xxxxxx"
],
[
"/dev/xvdc",
"vol-xxxxxx"
],
[
"/dev/sda1",
"vol-xxxxxxx"
]
]
and the task below with dict
- debug:
    msg: "{{ dict(item.block_device_mapping.keys()|zip(
             item.block_device_mapping|json_query('*.volume_id'))) }}"
  loop: "{{ ec2.results[0].instances }}"
gives the tuples
"msg": {
"/dev/sda1": "vol-xxxxxxx",
"/dev/xvdb": "vol-xxxxxx",
"/dev/xvdc": "vol-xxxxxx"
}
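As an aside, the same result is available without json_query (which requires the jmespath Python library) by converting the map to a list of key/value pairs with dict2items; a sketch:

```yaml
- debug:
    msg: "{{ item.block_device_mapping | dict2items
             | map(attribute='value.volume_id') | list }}"
  loop: "{{ ec2.results[0].instances }}"
```

map(attribute='value.volume_id') works here because Jinja2's map filter accepts dotted attribute paths.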
The mistake:
The main mistake was treating item.block_device_mapping as if it were the map you wanted to work with, when it is actually a map of maps. The keys you first have to find are, according to the msg you printed, /dev/sda1, /dev/xvdb and /dev/xvdc.
So first you'd have to build an array with the keys of the parent map:
# the keys of the parent map, as a list
item['block_device_mapping'] | list
You can then register that and loop over it, giving you the keys you need to access those elements' attributes.

"assign_public_ip: no" still creating public IPs: Ansible and EC2

I have defined assign_public_ip: no in my ec2 creation task:
- name: Basic provisioning of EC2 instance
  ec2:
    assign_public_ip: no
    aws_access_key: "{{ aws_id }}"
    aws_secret_key: "{{ aws_key }}"
    region: "{{ aws_region }}"
    image: "{{ standard_ami }}"
    instance_type: "{{ free_instance }}"
    key_name: "{{ ssh_keyname }}"
    count: 3
    state: present
    group_id: "{{ secgroup_id }}"
    wait: no
    #delete_on_termination: yes
    instance_tags:
      Name: Dawny33Template
  register: ec2
Yet, the spawned instances are being assigned public IPs:
TASK [Add new instance to host group] ******************************************
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-45-61.us-west-2.compute.internal', u'public_ip': u'35.167.242.55', u'private_ip': u'172.31.45.61', u'id': u'i-0b2f186f2ea822a61', u'ebs_optimized': False, u'state': u'pending', u'virtualization_type': u'hvm', u'root_device_name': u'/dev/xvda', u'ramdisk': None, u'block_device_mapping': {u'/dev/xvda': {u'status': u'attaching', u'delete_on_termination': True, u'volume_id': u'vol-07e905319086716c9'}}, u'key_name': u'Dawny33Ansible', u'image_id': u'ami-f173cc91', u'tenancy': u'default', u'groups': {u'sg-eda31a95': u'POC'}, u'public_dns_name': u'ec2-35-167-242-55.us-west-2.compute.amazonaws.com', u'state_code': 0, u'tags': {u'Name': u'Dawny33Template'}, u'placement': u'us-west-2b', u'ami_launch_index': u'2', u'dns_name': u'ec2-35-167-242-55.us-west-2.compute.amazonaws.com', u'region': u'us-west-2', u'launch_time': u'2017-01-31T06:25:38.000Z', u'instance_type': u't2.micro', u'architecture': u'x86_64', u'hypervisor': u'xen'})
Can someone help me understand why this is happening?
The "assign_public_ip" field is a bool value.
here
Somebody did fix this in the ansible-module-core library but change wasn't reflected.
here
The problem here was that I was launching the instances in a public subnet of the VPC, so public IPs were being assigned by default.
So, if public IPs are not wanted, one needs to launch the instances in a private subnet of a VPC. For example, below is a task for provisioning EC2 instances in a private subnet:
- name: Basic provisioning of EC2 instance
  ec2:
    assign_public_ip: no
    aws_access_key: "{{ aws_id }}"
    aws_secret_key: "{{ aws_key }}"
    region: "{{ aws_region }}"
    image: "{{ image_instance }}"
    instance_type: "{{ free_instance }}"
    key_name: "{{ ssh_keyname }}"
    count: 3
    state: present
    group_id: "{{ secgroup_id }}"
    vpc_subnet_id: "{{ private_subnet_id }}"
    wait: no
    instance_tags:
      Name: "{{ template_name }}"
    #delete_on_termination: yes
  register: ec2

Ansible: Trying to create multiple EC2 instances in multiple regions in one shot

I'm trying to create an AWS EC2 ansible playbook that:
1) first allocates one VPC each on three Regions which are:
us-west-1, ap-northeast-1 and eu-west-1.
2) Finds the latest ubuntu AMI for each region (ec2_ami_search),
3) then using the discovered results from 1) and 2),
create one EC2 instance per region with the latest ubuntu AMI (for the region)
with Availability Zones us-west-1a, ap-northeast-1a
and eu-west-1a, respectively.
With Ansible, I had no problem with step 1) and 2) which was simply:
tasks:
  - name: create a vpc
    ec2_vpc:
      state: present
      region: "{{ item.region }}"
      internet_gateway: True
      resource_tags: { env: production }
      cidr_block: 10.0.0.0/16
      subnets:
        - cidr: 10.0.0.0/24
          az: "{{ item.az }}"
          resource_tags:
            env: production
            tier: public
      route_tables:
        - subnets:
            - 10.0.0.0/24
          routes:
            - dest: 0.0.0.0/0
              gw: igw
    with_items:
      - region: us-west-1
        az: us-west-1a
      - region: ap-northeast-1
        az: ap-northeast-1a
      - region: eu-west-1
        az: eu-west-1a
  ...
  - name: Get the ubuntu trusty AMI
    ec2_ami_search: distro=ubuntu release=trusty virt=hvm region={{ item }}
    with_items:
      - us-west-1
      - ap-northeast-1
      - eu-west-1
    register: ubuntu_image
  ...
and the output of the ubuntu_image variable from the debug module:
TASK: [print out ubuntu images] ***********************************************
ok: [localhost] => {
    "ubuntu_image": {
        "changed": false,
        "msg": "All items completed",
        "results": [
            {
                "aki": null,
                "ami": "ami-b33dccf7",
                "ari": null,
                "changed": false,
                "invocation": {
                    "module_args": "distro=ubuntu release=trusty virt=hvm region=us-west-1",
                    "module_name": "ec2_ami_search"
                },
                "item": "us-west-1",
                "serial": "20150629",
                "tag": "release"
            },
            {
                "aki": null,
                "ami": "ami-9e5cff9e",
                "ari": null,
                "changed": false,
                "invocation": {
                    "module_args": "distro=ubuntu release=trusty virt=hvm region=ap-northeast-1",
                    "module_name": "ec2_ami_search"
                },
                "item": "ap-northeast-1",
                "serial": "20150629",
                "tag": "release"
            },
            {
                "aki": null,
                "ami": "ami-7c4b0a0b",
                "ari": null,
                "changed": false,
                "invocation": {
                    "module_args": "distro=ubuntu release=trusty virt=hvm region=eu-west-1",
                    "module_name": "ec2_ami_search"
                },
                "item": "eu-west-1",
                "serial": "20150629",
                "tag": "release"
            }
        ]
    }
}
However, I couldn't figure out how to make step 3) take the result from the ubuntu_image register variable and determine which of the 3 AMIs and subnets a given EC2 instance belongs to.
See below, where as a workaround I manually hardcoded the ami and subnet values, which I simply copied from the ubuntu_image printout above:
- name: start the instances
  ec2:
    image: "{{ item.ami }}" # MANUALLY HARDCODED
    region: "{{ item.region }}"
    instance_type: "{{ instance_type }}"
    assign_public_ip: True
    key_name: "{{ item.name }}"
    group: ["http deployment", "ssh deployment", "outbound deployment"]
    instance_tags: { Name: "{{ item.name }}", type: ss, env: production }
    exact_count: "{{ count }}"
    count_tag: { Name: "{{ item.name }}" }
    vpc_subnet_id: "{{ item.subnet }}" # MANUALLY HARDCODED
    wait: yes
  register: ec2
  with_items:
    - region: us-west-1
      name: ss12
      ami: ami-b33dccf7       # MANUALLY HARDCODED
      subnet: subnet-35a22550 # MANUALLY HARDCODED
    - region: ap-northeast-1
      name: ss21
      ami: ami-9e5cff9e       # MANUALLY HARDCODED
      subnet: subnet-88c47dff # MANUALLY HARDCODED
    - region: eu-west-1
      name: ss32
      ami: ami-7c4b0a0b       # MANUALLY HARDCODED
      subnet: subnet-23f59554 # MANUALLY HARDCODED
While hardcoding the ami/subnet works, can you think of a solution that avoids it?
I tried messing with set_fact to no avail, as I couldn't get it to become a dictionary of "region to AMI" value mappings.
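For what it's worth, the registered structure printed above can be turned into exactly such a mapping with set_fact, using the zip and dict idioms; a sketch, assuming ubuntu_image has the shape shown in the printout:

```yaml
- name: build a region -> AMI dictionary from the registered results
  set_fact:
    region_to_ami: "{{ dict(ubuntu_image.results | map(attribute='item')
                       | zip(ubuntu_image.results | map(attribute='ami'))) }}"
```

region_to_ami would then map each region name to its AMI, e.g. us-west-1 to ami-b33dccf7, usable as region_to_ami[item.region] in the ec2 task.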
Keep in mind that Ansible is a "pluggable" system, so it's really easy to customize it yourself. Sometimes that's even easier and faster than trying to find a workaround using "native" modules.
In your case you can easily write your own custom lookup_plugin that would search for a correct subnet.
For example:
Create a folder called lookup_plugins in your main folder.
Create a file (if you do not have one) called ansible.cfg:
[defaults]
lookup_plugins = lookup_plugins
Create a file in lookup_plugins called subnets.py:
import boto.vpc

class LookupModule(object):
    def __init__(self, basedir=None, **kwargs):
        self.basedir = basedir
        self.plugin_name = 'subnets'

    def run(self, regions, variable=None, **kwargs):
        if not isinstance(regions, list):
            regions = [regions]
        # return the first subnet ID found in each requested region
        return [boto.vpc.connect_to_region(region).get_all_subnets()[0].id
                for region in regions]
The above simple code looks up a subnet in each given region. Of course you can customize it however you want.
Then, in your playbook, reference this plugin to find the correct subnet. Example:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Start instance
      debug: msg="Starting instance {{ item.ami }} in {{ item.region }} in {{ item.subnet }}"
      with_items:
        - region: us-west-1
          name: ss12
          ami: ami-b33dccf7
          subnet: "{{ lookup('subnets', 'us-west-1') }}"
        - region: ap-northeast-1
          name: ss21
          ami: ami-9e5cff9e
          subnet: "{{ lookup('subnets', 'ap-northeast-1') }}"
        - region: eu-west-1
          name: ss32
          ami: ami-7c4b0a0b
          subnet: "{{ lookup('subnets', 'eu-west-1') }}"
In your case you would similarly reference the correct AMI and its associated region.
If you still want to do this without help from another module, you can take the server index modulo '%' the length of the subnet list:
"{{ subnets[item.0 | int % subnets | length | int].aws_ec2_subnets }}"
Example code:
vars:
  subnets:
    - { zone: "us-east-1a", aws_ec2_subnets: 'subnet-123' }
    - { zone: "us-east-1b", aws_ec2_subnets: 'subnet-456' }
    - { zone: "us-east-1d", aws_ec2_subnets: 'subnet-789' }
  server_list:
    - server1
    - server2
    - server3
tasks:
  - name: Create new ec2 instance
    ec2:
      profile: "{{ aws_profile }}"
      key_name: "{{ aws_key_name }}"
      group_id: "{{ aws_security_group }}"
      instance_type: "{{ aws_instance_type }}"
      image: "{{ aws_ami }}"
      region: "{{ region }}"
      exact_count: "1"
      #instance_profile_name: none
      wait: yes
      wait_timeout: 500
      volumes: "{{ volumes }}"
      monitoring: no
      vpc_subnet_id: "{{ subnets[item.0 | int % subnets | length | int].aws_ec2_subnets }}"
      assign_public_ip: no
      tenancy: default
      termination_protection: yes
      instance_tags:
        App: "{{ app_name }}"
        Environment: "{{ environment_type }}"
        Platform: "{{ platform_name }}"
        Name: "{{ item.1 }}"
      count_tag:
        App: "{{ app_name }}"
        Environment: "{{ environment_type }}"
        Platform: "{{ platform_name }}"
        Name: "{{ item.1 }}"
    register: ec2_new_instance
    with_indexed_items:
      - "{{ server_list }}"
This is how I do it.
The launchNodes.yml is just a simple ec2 task with some tagging.
- debug:
    msg: "launching {{ nodeCount }} nodes in these subnets {{ ec2SubnetIds }}"

- name: clear finalSubnetList
  set_fact:
    finalSubnetList: []

- name: build final list
  set_fact:
    finalSubnetList: "{{ finalSubnetList }} + [ '{{ ec2SubnetIds[ (ec2subnet|int % ec2SubnetIds|length)|int ] }}' ]"
  with_sequence: count={{ nodeCount }}
  loop_control:
    loop_var: ec2subnet

- debug:
    msg: "finalSubnetList {{ finalSubnetList }}"

- include_tasks: launchNodes.yml
  vars:
    ec2SubnetId: "{{ finalSubnetList[index|int - 1] }}"
    nodeCount: 1
  with_sequence: count="{{ finalSubnetList|length }}"
  loop_control:
    loop_var: index