So I'm working on some Ansible playbooks to manage EC2 instances, and here is one of them:
---
- name: Create linux EC2 node
  hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - { role: ec2, ec2_volume_size: 500,
        ec2_ansible_tag: { ansible_type: linux-node },
        ec2_instance_type: "m4.large"
      }
    - { role: ec2_eip, ec2_tag: tag_ansible_type_linux_node }
The ec2 role is as follows:
- name: Create instance
  local_action:
    module: ec2
    region: "{{ ec2.region }}"
    key_name: "{{ ec2_key_name | default(ec2.credentials) }}"
    instance_tags: "{{ ec2_ansible_tag | combine( ec2.tags ) }}"
    image: "{{ ec2_image | default(ec2.image) }}"
    instance_type: "{{ ec2_instance_type | default('t2.small') }}"
    instance_profile_name: "{{ ec2.role | default('') }}"
    ebs_optimized: "{{ ec2.ebs | default(false) }}"
    group: "{{ ec2_groups | default(ec2.group_name) }}"
    vpc_subnet_id: "{{ ec2_subnet | default(ec2.subnet) }}"
    assign_public_ip: "{{ ec2.public_ip | default('yes') }}"
    private_ip: "{{ ec2.private_ip | default('') }}"
    monitoring: "{{ ec2_cloudwatch | default('no') }}"
    wait: yes
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: "{{ ec2_volume_size | default(100) }}"
  register: ec2inst

- name: register data
  set_fact:
    ec2_create_data: "{{ ec2inst }}"
The issue is that the instances start in the terminated state, which only began happening very recently.
The output is:
"msg": "wait for instances running timeout on (Timeout Time)"
What can lead to this happening?
Answer from the comments:
Worth a look: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html
You're totally right: it's "State transition reason message Client.VolumeLimitExceeded: Volume limit exceeded".
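For anyone hitting the same symptom, a quick way to confirm the reason is to read the state transition reason straight off the dead instances. A minimal sketch, assuming the newer ec2_instance_info module is available, and with the region and tag filter as placeholders for your own values:

- name: Look up terminated instances
  ec2_instance_info:
    region: us-east-1
    filters:
      instance-state-name: terminated
      "tag:ansible_type": linux-node
  register: dead_instances

- name: Show why each instance was terminated (e.g. Client.VolumeLimitExceeded)
  debug:
    msg: "{{ item.instance_id }}: {{ item.state_transition_reason }}"
  with_items: "{{ dead_instances.instances }}"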
Related
I am trying to create a Windows Server machine with the vmware_guest module from a template. I see that it ignores the disk parameter and its size_gb option: when the playbook creates the virtual machine, it has the same disk size as the template. This only happens when I create a Windows Server machine; if I try to create a Linux server, the module works correctly.
This is my playbook:
- name: Clone VM from template with static IP
  vmware_guest:
    validate_certs: "{{ validate_certs | default('False') }}"
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ vm_datacenter }}"
    name: "{{ vm_name }}"
    folder: "{{ vm_folder }}"
    template: "{{ vm_template }}"
    state: poweredon
    annotation: "{{ vm_notes | default('Provisioned by ansible') }}"
    cluster: "{{ vm_cluster }}"
    hardware:
      num_cpus: "{{ cpu }}"
      memory_mb: "{{ mem_mb }}"
      hotadd_cpu: "{{ hot_add_cpu | default('True') }}"
      hotremove_cpu: "{{ hot_remove_cpu | default('True') }}"
      hotadd_memory: "{{ hot_add_memory | default('True') }}"
    disk:
      - size_gb: "{{ disk_size | default('16') }}"
        type: "{{ vm_disk_type | default('thin') }}"
        datastore: "{{ vm_datastore }}"
    networks:
      - name: "{{ vm_port_group }}"
        type: static
        ip: "{{ vm_ip }}"
        netmask: "{{ netmask }}"
        gateway: "{{ network_gateway }}"
    wait_for_ip_address: yes
    customization:
      dns_servers:
        - "{{ dns_server1 }}"
  register: static_vm
I would try limiting your indentation. See if that helps.
disk:
  - size_gb: "{{ disk_size | default('16') }}"
    type: "{{ vm_disk_type | default('thin') }}"
    datastore: "{{ vm_datastore }}"
I'm wondering whether it is possible to perform a loop in the hostvars folder when using Ansible?
Here is what I've tried, but I haven't had any success in making it work - or is it just not possible?
---
list_pool: 'list ltm pool {{ items }}'
with_items:
  - 'abc123'
  - 'def456'
I would use the "list_pool" variable in a playbook afterward:
- name: List pool
  bigip_command:
    server: "{{ some_server }}"
    user: "{{ some_user }}"
    password: "{{ some_password }}"
    commands:
      - "{{ list_pool }}"
    validate_certs: no
  delegate_to: localhost
Not sure what you mean when you say you want to loop over hostvars folder.
What I can interpret from your tasks is: "You need to execute the big-ip command list ltm <pool-name> for multiple pools in the list list_pool".
If that's what you're after, this should work:
- name: Set list_pool fact
  set_fact:
    list_pool: "{{ list_pool | default([]) + [item] }}"
  with_items:
    - 'abc123'
    - 'def456'

- name: List pool
  bigip_command:
    server: "{{ some_server }}"
    user: "{{ some_user }}"
    password: "{{ some_password }}"
    commands:
      - "list ltm {{ item }}"
    validate_certs: no
  delegate_to: localhost
  with_items: "{{ list_pool }}"
I got this working with the following solution:
The host_vars file would look like this:
---
pre_checks:
  checks:
    pool:
      - name: "pool_123"
      - name: "pool_456"
...
And the play would look like this:
--output truncated---
- name: Fetch device host_vars
  set_fact:
    device_config: "{{ lookup('file', playbook_dir + '/host_vars/' + inventory_hostname + '.yml') | from_yaml }}"

- name: Check pool
  bigip_command:
    server: "{{ inventory_hostname }}"
    user: "{{ remote_username }}"
    password: "{{ remote_passwd }}"
    commands:
      - "list ltm pool {{ item }}"
    validate_certs: no
  with_items:
    - "{{ device_config.pre_checks | json_query('checks.pool[*].name') }}"
  delegate_to: localhost
  when: "'active' in Active_LTM['stdout'][0]"
  register: Pre_Checks

- name: Verify pool
  debug: var=item
  with_items: "{{ Pre_Checks.results | map(attribute='stdout_lines') | list }}"
I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and later should attach an existing volume to the new instance:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination

- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good, everything works.
I would like to add a clean-up step, so that if an error of any type occurs in the later modules, I won't be left with any garbage instances.
I understand that the idea here is to use a block; however, when I tried working with block here, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost,
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars
    - name: Create instance
      block:
        - debug:
            msg: "Launcing EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance
  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug: msg="{{ new_ec2.instances[0].id }}"
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put the debug task into the block. The debug module returns an ok status, so:
1. it does not notify a handler (this requires a changed status),
2. the rescue section is never triggered (this requires a failed status).
Thus it is expected that "nothing really happens".
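For reference, a handler only fires when the notifying task reports a changed status; as a minimal illustration (not the actual fix), a debug task can be forced to notify like this:

- debug:
    msg: "Launching EC2"
  changed_when: true
  notify: launch_instance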
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug:
        msg: "{{ new_ec2.instances[0].id }}"
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
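As a side note: if the goal is to be left with no garbage instances at all, the rescue task can terminate the instance rather than stop it. A minimal sketch, assuming the same new_ec2 register and the same ec2 module:

  rescue:
    - name: terminate_instance
      ec2:
        region: "{{ aws_vars.region }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: absent
        wait: yes
      when: new_ec2 is defined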
I'm trying to get the ID of an AWS volume that already exists and is attached to an EC2 instance, using Ansible.
I have a lookup task using the ec2_remote_facts module that gets details of the EC2 instance, including the volume ID details.
The task:
- name: lookup ec2 virtual machines
  ec2_remote_facts:
    aws_access_key: "{{aws_access_key}}"
    aws_secret_key: "{{aws_secret_key}}"
    region: "{{ec2_region}}"
    filters:
      instance-state-name: running
      "tag:Name": "{{server_name}}"
      "tag:Environment": "{{environment_type}}"
      "tag:App": "{{app_name}}"
      "tag:Role": "{{role_type}}"
  register: ec2_info
Example output:
"msg": [
{
"ami_launch_index": "0",
"architecture": "x86_64",
"block_device_mapping": [
{
"attach_time": "2017-01-12T17:24:17.000Z",
"delete_on_termination": true,
"device_name": "/dev/sda1",
"status": "attached",
"volume_id": "vol-123456789"
}
],
"client_token": "",
"ebs_optimized": false,
"groups": [
{
"id": "sg-123456789",
"name": "BE-VPC"
}
],
..... and more
Now I need to get only block_device_mapping -> volume_id, but I don't know how to extract just the ID.
I have tried several tasks to get only the volume ID, but they didn't work, for example:
- debug: msg="{{item | map(attribute='block_device_mapping') | map('regex_search','volume_id') | select('string') | list }}"
with_items: "{{ec2_info.instances | from_json}}"
This didn't work either:
- name: get associated vols
  ec2_vol:
    aws_access_key: "{{aws_access_key}}"
    aws_secret_key: "{{aws_secret_key}}"
    region: "{{ec2_region}}"
    instance: "{{ ec2_info.isntances.id }}"
    state: list
    region: "{{ region }}"
  register: ec2_vol_lookup

- name: tag the volumes
  ec2_tag:
    aws_access_key: "{{aws_access_key}}"
    aws_secret_key: "{{aws_secret_key}}"
    region: "{{ec2_region}}"
    resource: "{{ item.id }}"
    region: "{{ region }}"
    tags:
      Environment: "{{environment_type}}"
      Groups: "{{group_name}}"
      Name: "vol_{{server_name}}"
      Role: "{{role_type}}"
  with_items: "{{ ec2_vol_lookup.volumes | default([]) }}"
Any idea?
Ansible version: 2.2.0.0
If you have a single instance and a single block device, use:
- debug: msg="{{ ec2_info.instances[0].block_device_mapping[0].volume_id }}"
If you have many instances and many block devices:
- debug: msg="{{ item.block_device_mapping | map(attribute='volume_id') | list }}"
with_items: "{{ ec2_info.instances }}"
In the case of a single instance and many devices:
- debug: msg="{{ ec2_info.instances[0].block_device_mapping | map(attribute='volume_id') | list }}"
Here is what I had to do to get volume_id (Ansible 2.9.x):
- name: gather ec2 remote facts
  ec2_instance_info:
    filters:
      vpc-id: "{{ vpc }}"
      subnet-id: "{{subnet}}"
      "tag:Name": "my_instance_name"
    region: "{{ region }}"
  register: ec2_instance_info

- debug:
    msg: "index: {{item.1.tags.Name}} volume info: {{item.1.block_device_mappings[0].ebs.volume_id}}"
  with_indexed_items:
    - "{{ ec2_instance_info.instances }}"
If you can identify the volume by name, or another tag you set on it, you can use the ec2_vol_facts module:
- ec2_vol_facts:
    region: <your-ec2-region>
    filters:
      "tag:Name": <your-volume-name>
      "tag:Role": <e.g. db-data>
  register: ec2_vol

- debug:
    msg: "Ids: {{ ec2_vol.volumes | map(attribute='id') | list | to_nice_json }}"
To tag all volumes attached to AWS EC2 instances, I've used the following:
- name: Get instance ec2 facts
  ec2_remote_facts:
    region: "{{ aws_region }}"
    filters:
      "private_ip_address": "{{ inventory_hostname }}"
  register: ec2_info

- name: Tag volumes
  ec2_tag:
    region: "{{ aws_region }}"
    resource: "{{ item.volume_id }}"
    state: present
    tags:
      Name: "{{ ec2_info.instances[0].tags.Name }} - {{ item.device_name }}"
  with_items:
    - "{{ ec2_info.instances[0].block_device_mapping }}"
The volume tag will be composed of the instance's Name tag and the device name (e.g. /dev/sdf).
- name: Get instances list
  ec2_instance_facts:
    filters:
      # there could be any method with which you get the instance_id values
      "tag:instance": "{{ instance_inventory | map(attribute='instance_id') | list }}"
    region: us-east-1
  register: ec2_sets

- name: Delete instance's volume(s)
  ec2_vol:
    id: '{{ item }}'
    state: absent
  loop: "{{ ec2_sets.instances | sum(attribute='block_device_mappings', start=[]) | map(attribute='ebs.volume_id') | list }}"
  # the when condition keeps the task idempotent
  when: ( ec2_sets.instances | sum(attribute='block_device_mappings', start=[]) | map(attribute='ebs.volume_id') | list | length )
I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: ec2.instances
Now I want to check the exit status of each task, whether it is a success or a failure.
If any one of the tasks fails, the other tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
Example:
register: ec2

- fail:
  when: "ec2.rc == 1"
Here rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
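Note that rc is only returned by modules that actually run a command (command, shell, and friends); for the ec2 module you would normally test the registered result itself. A minimal sketch of the pattern, assuming the register: ec2 from the provisioning task and with the module parameters trimmed down:

- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
  register: ec2
  ignore_errors: yes

- name: Stop the play if provisioning failed
  fail:
    msg: "EC2 provisioning failed"
  when: ec2 is failed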
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling will help you?
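For completeness, the block/rescue shape that suggestion points at looks roughly like this; a sketch with the module parameters trimmed to a few from the question:

- block:
    - name: Instance provisioning
      ec2:
        region: "{{ vpc_region }}"
        key_name: "{{ ec2_keypair }}"
        instance_type: "{{ instance_type }}"
        image: "{{ ec2_image }}"
      register: ec2
    - name: adding apache ip to hosts
      lineinfile:
        dest: "/etc/ansible/hosts"
        line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
      with_items: "{{ ec2.instances }}"
  rescue:
    - debug:
        msg: "A task failed; the remaining tasks in the block were skipped"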
Kumar, if you want to check whether each task's output is a success or a failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: ec2.instances
  when: fileoutput | changed
In your code, register a variable in each and every task; if that task's result has changed: true, the following task will execute, otherwise it will be skipped.
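One caveat: the ec2 | changed filter form is deprecated in newer Ansible releases; the equivalent test syntax is ec2 is changed. A minimal sketch of the first conditional task with the test form:

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "[{{ release_name }}]"
  when: ec2 is changed
  register: fileoutput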