I have set assign_public_ip: no in my ec2 creation task:
- name: Basic provisioning of EC2 instance
  ec2:
    assign_public_ip: no
    aws_access_key: "{{ aws_id }}"
    aws_secret_key: "{{ aws_key }}"
    region: "{{ aws_region }}"
    image: "{{ standard_ami }}"
    instance_type: "{{ free_instance }}"
    key_name: "{{ ssh_keyname }}"
    count: 3
    state: present
    group_id: "{{ secgroup_id }}"
    wait: no
    #delete_on_termination: yes
    instance_tags:
      Name: Dawny33Template
  register: ec2
Yet, the spawned instances are being assigned public IPs:
TASK [Add new instance to host group] ******************************************
changed: [localhost] => (item={u'kernel': None, u'root_device_type': u'ebs', u'private_dns_name': u'ip-172-31-45-61.us-west-2.compute.internal', u'public_ip': u'35.167.242.55', u'private_ip': u'172.31.45.61', u'id': u'i-0b2f186f2ea822a61', u'ebs_optimized': False, u'state': u'pending', u'virtualization_type': u'hvm', u'root_device_name': u'/dev/xvda', u'ramdisk': None, u'block_device_mapping': {u'/dev/xvda': {u'status': u'attaching', u'delete_on_termination': True, u'volume_id': u'vol-07e905319086716c9'}}, u'key_name': u'Dawny33Ansible', u'image_id': u'ami-f173cc91', u'tenancy': u'default', u'groups': {u'sg-eda31a95': u'POC'}, u'public_dns_name': u'ec2-35-167-242-55.us-west-2.compute.amazonaws.com', u'state_code': 0, u'tags': {u'Name': u'Dawny33Template'}, u'placement': u'us-west-2b', u'ami_launch_index': u'2', u'dns_name': u'ec2-35-167-242-55.us-west-2.compute.amazonaws.com', u'region': u'us-west-2', u'launch_time': u'2017-01-31T06:25:38.000Z', u'instance_type': u't2.micro', u'architecture': u'x86_64', u'hypervisor': u'xen'})
Can someone help me understand why this is happening?
The "assign_public_ip" field is a bool value.
here
Somebody did fix this in the ansible-module-core library but change wasn't reflected.
here
The problem here was that I was launching the slave instances in a public subnet of the VPC, so public IPs are assigned by default.
So, if public IPs are not wanted, the instances need to be launched in a private subnet of the VPC. For example, here is a task that provisions EC2 instances in a private subnet:
- name: Basic provisioning of EC2 instance
  ec2:
    assign_public_ip: no
    aws_access_key: "{{ aws_id }}"
    aws_secret_key: "{{ aws_key }}"
    region: "{{ aws_region }}"
    image: "{{ image_instance }}"
    instance_type: "{{ free_instance }}"
    key_name: "{{ ssh_keyname }}"
    count: 3
    state: present
    group_id: "{{ secgroup_id }}"
    vpc_subnet_id: "{{ private_subnet_id }}"
    wait: no
    #delete_on_termination: yes
    instance_tags:
      Name: "{{ template_name }}"
  register: ec2
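If you want to verify the result, a small check like the following (a sketch, reusing the ec2 variable registered by the task above) should show public_ip coming back as null for every instance launched in the private subnet:

- name: Confirm no public IPs were assigned
  debug:
    msg: "{{ item.id }} public_ip={{ item.public_ip }}"
  with_items: "{{ ec2.instances }}"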
Related
The following Ansible code creates 1 EC2 instance with 1 tag
- ec2:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    ec2_region: "{{ aws_region }}"
    assign_public_ip: yes
    instance_type: "{{ instance_type }}"
    image: "{{ aws_ami }}"
    keypair: ansible_key
    vpc_subnet_id: "{{ aws_subnet_id }}"
    group_id: "{{ aws_security_group }}"
    exact_count: 1
    count_tag:
      name: first_group
    instance_tags:
      name: first_group
    wait: yes
  register: ec2

- debug:
    var: ec2
What is the best way to rework this code to create 2 EC2 instances, each one with a different tag?
Mind you, this is OLD STUFF, so it may not work any more. But I defined the inventory first, in group_vars/all.yml:
env_id: EE_TEST3_SLS
env_type: EE
zones:
  - zone: presentation
    servers:
      - type: rp
        count: 1
        instance_type: m4.xlarge
      - type: L7
        count: 1
        instance_type: m4.xlarge
  - zone: application
    servers:
      - type: app
        count: 2
        instance_type: m4.xlarge
      - type: sre
        count: 0
        instance_type: m4.xlarge
      - type: httpd
        count: 1
        instance_type: m4.xlarge
      - type: terracotta
        count: 4
        mirror_groups: 2
        instance_type: m4.xlarge
      - type: L7
        count: 1
        instance_type: m4.xlarge
  - zone: data
    servers:
      - type: batch
        count: 2
        instance_type: m4.xlarge
      - type: terracotta_tmc
        count: 1
        instance_type: m4.xlarge
      - type: terracotta
        count: 2
        mirror_groups: 1
        instance_type: m4.xlarge
      - type: oracle
        count: 1
        instance_type: m4.4xlarge
      - type: marklogic
        count: 1
        instance_type: m4.4xlarge
      - type: alfresco
        count: 1
        instance_type: m4.4xlarge
Then, the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Provision a set of instances
      ec2:
        instance_type: "{{ item.1.instance_type }}"
        image: "{{ image }}"
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnets[item.0.zone] }}"
        tenancy: "{{ tenancy }}"
        group_id: "{{ group_id }}"
        key_name: "{{ key_name }}"
        wait: true
        instance_tags:
          EnvID: "{{ env_id }}"
          Type: "{{ item.1.type }}"
          Zone: "{{ item.0.zone }}"
        count_tag:
          EnvID: "{{ env_id }}"
          Type: "{{ item.1.type }}"
          Zone: "{{ item.0.zone }}"
        exact_count: "{{ item.1.count }}"
      with_subelements:
        - "{{ zones }}"
        - servers
      register: ec2_results
The idea is that you set all the tags in the inventory, and just reference them in the task.
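To see how with_subelements pairs each zone with each of its servers, here is a minimal sketch (using only the zones variable defined above) that just prints what each iteration receives:

- name: Show what with_subelements iterates over
  debug:
    msg: "zone={{ item.0.zone }} type={{ item.1.type }} count={{ item.1.count }}"
  with_subelements:
    - "{{ zones }}"
    - servers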
Provided you simply want to create the same instances, only replacing the value first_group with second_group in the module parameters, the following (totally untested) example should put you on track.
If you need to go further, as already advised in the comments, have a look at the Ansible loop documentation.
- ec2:
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_access_key }}"
    ec2_region: "{{ aws_region }}"
    assign_public_ip: yes
    instance_type: "{{ instance_type }}"
    image: "{{ aws_ami }}"
    keypair: ansible_key
    vpc_subnet_id: "{{ aws_subnet_id }}"
    group_id: "{{ aws_security_group }}"
    exact_count: 1
    count_tag:
      name: "{{ item }}"
    instance_tags:
      name: "{{ item }}"
    wait: yes
  register: ec2
  loop:
    - first_group
    - second_group

- debug:
    var: ec2.results
ansible 2.7.4
Below works:
tasks:
  - name: Launch instance
    ec2:
      key_name: "{{ keypair }}"
      .
      .
    register: ec2

  - name: Add new instance to host group
    add_host:
      hostname: "{{ item.public_ip }}"
      groupname: launched
    with_items: "{{ ec2.instances }}"
But below doesn't
tasks:
  - name: Launch instance
    ec2:
      key_name: "{{ keypair }}"
      .
      .
    register: "{{ register }}"

  - name: Add new instance to host group
    add_host:
      hostname: "{{ item.public_ip }}"
      groupname: launched
    with_items: "{{ register.instances }}"
It results in:
fatal: [localhost]: FAILED! => {"msg": "'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'instances'"}
Not sure if it is related to this:
https://github.com/ansible/ansible/issues/19803
Many thanks for the replies
The name given to register is not templated: Ansible takes it literally, so you cannot register a result into a dynamically named variable yet.
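If a dynamically named result is really needed, one workaround (an untested sketch; result_name is a hypothetical variable holding the desired name, the ec2 parameters are placeholders, and the vars lookup needs Ansible 2.5+) is to register under a fixed name and then copy it with set_fact, whose keys are templated:

- name: Launch instance
  ec2:
    key_name: "{{ keypair }}"
    instance_type: t2.micro
    image: "{{ ami }}"
    wait: yes
  register: ec2

# set_fact keys are templated, unlike register
- name: Store the result under a dynamic name
  set_fact:
    "{{ result_name }}": "{{ ec2 }}"

- name: Add new instance to host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: launched
  with_items: "{{ lookup('vars', result_name).instances }}"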
Using Ansible, I am trying to create EC2 instances and attach an extra network interface to each instance so that they will have two private IP addresses. However, for some reason, the ec2_eni module creates the network interfaces but does not attach them to the specified instances. What am I doing wrong? Below is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create new servers
      ec2:
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group_id: "{{ sec_group }}"
        assign_public_ip: yes
        wait: true
        key_name: '{{ key }}'
        instance_type: t2.micro
        image: '{{ ami }}'
        exact_count: '{{ count }}'
        count_tag:
          Name: "{{ server_name }}"
        instance_tags:
          Name: "{{ server_name }}"
      register: ec2

    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"

    - name: allocate new elastic IPs and associate it with instances
      ec2_eip:
        region: "{{ region }}"
        device_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
      register: eips

    - name: Show eip instance json data
      debug:
        msg: "{{ eips['results'] }}"

    - ec2_eni:
        subnet_id: "{{ subnet }}"
        state: present
        secondary_private_ip_address_count: 1
        security_groups: "{{ sec_group }}"
        region: "{{ region }}"
        device_index: 1
        description: "test-eni"
        instance_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
The strange thing is that the ec2_eni task succeeds, saying that it has attached the network interface to each instance when in reality it just creates the network interface and then does nothing with it.
As best I can tell, attached defaults to None, and the module docs say:
Specifies if network interface should be attached or detached from instance. If omitted, attachment status won't change
so the code does what the docs claim and skips the attachment step.
This appears to be a bug in the documentation, which claims the default is 'yes'; that is not accurate. Setting attached: yes explicitly makes the module perform the attachment, as in the sketch below.
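Here is a sketch of the same ec2_eni task with the attachment made explicit (only the attached parameter is new relative to the playbook above):

- ec2_eni:
    subnet_id: "{{ subnet }}"
    state: present
    attached: yes                      # attach the ENI instead of only creating it
    secondary_private_ip_address_count: 1
    security_groups: "{{ sec_group }}"
    region: "{{ region }}"
    device_index: 1
    description: "test-eni"
    instance_id: "{{ item['id'] }}"
  with_items: "{{ ec2['tagged_instances'] }}"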
I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
Now I want to check the exit status of each task, whether it succeeded or failed.
If any one of the tasks fails, the remaining tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
Ex.
register: ec2

- fail:
  when: "ec2.rc == 1"
Here rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling can help you?
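For reference, a minimal, self-contained sketch of block/rescue error handling (the tasks are placeholders, not your ec2 tasks):

- hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: First step (stands in for the ec2 task)
          command: /bin/true
        - name: Runs only if the first step succeeded
          debug:
            msg: "previous task succeeded"
      rescue:
        - name: Runs only if a task in the block failed
          debug:
            msg: "a task failed; the rest of the block was skipped"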
Kumar
If you want to check whether each task's output is a success or a failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
  when: fileoutput | changed
In your code, register a variable in each and every task; if a task's changed status is true, the following task will execute, otherwise it will be skipped.
I'm trying to make my playbooks aware of their environment; that being production or stage.
My playbook first pulls the ec2_tag for its environment successfully and set to the variable 'env'. On the ec2 provision task, I would like to use the 'env' variable in the vpc and subnet lookups, so it grabs the correct one based on its environment. Below I tried adding the 'env' variable into the lookups, but I'm getting a full string back instead of it pulling it from my variable file.
Do I need to add some logic checks within the {{}}?
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ env + '_subnet_public1' }}"
    group_id: "{{ env + '_app_group' }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
If you want to do variable interpolation then you'll need to use the hostvars[inventory_hostname]['variable'] syntax.
The FAQs cover this briefly but for your use case you should be able to do something like:
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ hostvars[inventory_hostname][env + '_subnet_public1'] }}"
    group_id: "{{ hostvars[inventory_hostname][env + '_app_group'] }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
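On newer Ansible versions (2.5+), the vars lookup is an alternative way to resolve a variable whose name is built at runtime; a sketch of drop-in replacements for the two parameters above:

vpc_subnet_id: "{{ lookup('vars', env + '_subnet_public1') }}"
group_id: "{{ lookup('vars', env + '_app_group') }}"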