I'm trying to get the ID of an AWS volume that already exists and is attached to an EC2 instance, using Ansible.
I have a lookup task using the ec2_remote_facts module that gets details of the EC2 instance, including the volume ID.
The task:
- name: lookup ec2 virtual machines
  ec2_remote_facts:
    aws_access_key: "{{aws_access_key}}"
    aws_secret_key: "{{aws_secret_key}}"
    region: "{{ec2_region}}"
    filters:
      instance-state-name: running
      "tag:Name": "{{server_name}}"
      "tag:Environment": "{{environment_type}}"
      "tag:App": "{{app_name}}"
      "tag:Role": "{{role_type}}"
  register: ec2_info
Example output:
"msg": [
    {
        "ami_launch_index": "0",
        "architecture": "x86_64",
        "block_device_mapping": [
            {
                "attach_time": "2017-01-12T17:24:17.000Z",
                "delete_on_termination": true,
                "device_name": "/dev/sda1",
                "status": "attached",
                "volume_id": "vol-123456789"
            }
        ],
        "client_token": "",
        "ebs_optimized": false,
        "groups": [
            {
                "id": "sg-123456789",
                "name": "BE-VPC"
            }
        ],
        ..... and more
Now I need to get only block_device_mapping -> volume_id, but I don't know how to extract just the ID.
I have tried several tasks to get only the volume ID, none of which worked, for example:
- debug: msg="{{item | map(attribute='block_device_mapping') | map('regex_search','volume_id') | select('string') | list }}"
  with_items: "{{ec2_info.instances | from_json}}"
This didn't work either:
- name: get associated vols
  ec2_vol:
    aws_access_key: "{{aws_access_key}}"
    aws_secret_key: "{{aws_secret_key}}"
    region: "{{ec2_region}}"
    instance: "{{ ec2_info.instances.id }}"
    state: list
  register: ec2_vol_lookup

- name: tag the volumes
  ec2_tag:
    aws_access_key: "{{aws_access_key}}"
    aws_secret_key: "{{aws_secret_key}}"
    region: "{{ec2_region}}"
    resource: "{{ item.id }}"
    tags:
      Environment: "{{environment_type}}"
      Groups: "{{group_name}}"
      Name: "vol_{{server_name}}"
      Role: "{{role_type}}"
  with_items: "{{ ec2_vol_lookup.volumes | default([]) }}"
Any ideas?
Ansible version: 2.2.0.0
If you have a single instance and a single block device, use:
- debug: msg="{{ ec2_info.instances[0].block_device_mapping[0].volume_id }}"
If you have many instances and many block devices:
- debug: msg="{{ item.block_device_mapping | map(attribute='volume_id') | list }}"
  with_items: "{{ ec2_info.instances }}"
In case of a single instance and many devices:
- debug: msg="{{ ec2_info.instances[0].block_device_mapping | map(attribute='volume_id') | list }}"
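If you need the IDs later in the play rather than just printed, you can collect them into a fact first. This is a minimal sketch built on the same registered variable; all_volume_ids is just an illustrative name:

- set_fact:
    all_volume_ids: "{{ ec2_info.instances | sum(attribute='block_device_mapping', start=[]) | map(attribute='volume_id') | list }}"

- debug: msg="{{ all_volume_ids }}"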
Here is what I had to do to get volume_id (Ansible 2.9.x):
- name: gather ec2 remote facts
  ec2_instance_info:
    filters:
      vpc-id: "{{ vpc }}"
      subnet-id: "{{subnet}}"
      "tag:Name": "my_instance_name"
    region: "{{ region }}"
  register: ec2_instance_info

- debug:
    msg: "index: {{item.1.tags.Name}} volume info: {{item.1.block_device_mappings[0].ebs.volume_id}}"
  with_indexed_items:
    - "{{ ec2_instance_info.instances }}"
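If you don't actually need the loop index, a plain loop over the same registered variable is a slightly simpler equivalent (a sketch using the same field names as above):

- debug:
    msg: "{{ item.tags.Name }} volume info: {{ item.block_device_mappings[0].ebs.volume_id }}"
  loop: "{{ ec2_instance_info.instances }}"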
If you can identify the volume by name, or another tag you set on it, you can use the ec2_vol_facts module:
ec2_vol_facts:
  region: <your-ec2-region>
  filters:
    "tag:Name": <your-volume-name>
    "tag:Role": <e.g. db-data>
register: ec2_vol

debug:
  msg: "Ids: {{ ec2_vol.volumes | map(attribute='id') | list | to_nice_json }}"
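If you expect exactly one match, you can pull the first ID into a variable for later tasks (a sketch; volume_id_found is an illustrative name):

- set_fact:
    volume_id_found: "{{ ec2_vol.volumes[0].id }}"
  when: ec2_vol.volumes | length > 0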
To tag all volumes attached to AWS EC2 instances, I've used the following:
- name: Get instance ec2 facts
  ec2_remote_facts:
    region: "{{ aws_region }}"
    filters:
      "private_ip_address": "{{ inventory_hostname }}"
  register: ec2_info

- name: Tag volumes
  ec2_tag:
    region: "{{ aws_region }}"
    resource: "{{ item.volume_id }}"
    state: present
    tags:
      Name: "{{ ec2_info.instances[0].tags.Name }} - {{ item.device_name }}"
  with_items:
    - "{{ ec2_info.instances[0].block_device_mapping }}"
The volume's Name tag will be composed of the instance's Name tag and the device name (e.g. /dev/sdf).
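To preview the composed names before actually tagging, a debug loop over the same mapping works (just a sketch on the structure above):

- name: Preview volume Name tags
  debug:
    msg: "{{ ec2_info.instances[0].tags.Name }} - {{ item.device_name }}"
  with_items:
    - "{{ ec2_info.instances[0].block_device_mapping }}"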
- name: Get instances list
  ec2_instance_facts:
    filters:
      "tag:instance": "{{ instance_inventory | map(attribute='instance_id') | list }}"
      # ^ any method that gives you the instance IDs will do here
    region: us-east-1
  register: ec2_sets

- name: Delete instance volume(s)
  ec2_vol:
    id: '{{ item }}'
    state: absent
  loop: "{{ ec2_sets.instances | sum(attribute='block_device_mappings', start=[]) | map(attribute='ebs.volume_id') | list }}"
  when: ( ec2_sets.instances | sum(attribute='block_device_mappings', start=[]) | map(attribute='ebs.volume_id') | list | length )
  # ^ the when condition keeps the task idempotent once no volumes are left
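To avoid repeating the long expression in both loop and when, the list could be computed once up front (a sketch; volume_ids_to_delete is an illustrative name, and an empty list simply makes the loop skip, so no extra guard is needed):

- set_fact:
    volume_ids_to_delete: "{{ ec2_sets.instances | sum(attribute='block_device_mappings', start=[]) | map(attribute='ebs.volume_id') | list }}"

- name: Delete instance volume(s)
  ec2_vol:
    id: "{{ item }}"
    state: absent
  loop: "{{ volume_ids_to_delete }}"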
Related
I have a text file that needs to be parsed so I can use it via a REST API with Ansible.
10.0.0.0/16 Building-A
10.1.0.0/16 Building-A
10.2.0.0/16 Building-B
10.3.0.0/16 Building-B
I need to convert this text to something like this:
{
    "parsed": [
        {
            "Building-A": [
                "10.0.0.0/16",
                "10.1.0.0/16"
            ]
        },
        {
            "Building-B": [
                "10.2.0.0/16",
                "10.3.0.0/16"
            ]
        }
    ]
}
Currently I run the following playbook to test the text parsing, without success: the entries are not grouped, so the building keys in the resulting list are not unique.
- name: Test
  hosts: localhost
  tasks:
    - name: Combine
      ansible.builtin.set_fact:
        parsed: "{{ (parsed | default([])) | union( [{item.split()[1]: item.split()[0]}] ) }}"
      loop: "{{ lookup('file','hostgroups.txt').strip().splitlines() }}"

    - name: Debug
      ansible.builtin.debug:
        var: parsed
ok: [localhost] => {
    "parsed": [
        {
            "Building-A": "10.0.0.0/16"
        },
        {
            "Building-A": "10.1.0.0/16"
        },
        {
            "Building-B": "10.2.0.0/16"
        },
        {
            "Building-B": "10.3.0.0/16"
        }
    ]
}
Thanks for the tip with the helper variables and the groupby filter. Here is the final playbook:
- name: Test
  hosts: localhost
  # strategy: free
  tasks:
    - name: Get List
      ansible.builtin.set_fact:
        parsed_list: "{{ parsed_list | default([]) + [_item] }}"
      loop: "{{ lookup('file','hostgroups.txt').strip().splitlines() }}"
      vars:
        _list: "{{ item.split() }}"
        _item: "{{ {'Name': _list[1], 'Subnet': _list[0]} }}"

    - name: Debug parsed_list
      ansible.builtin.debug:
        var: parsed_list

    - name: Group parsed_list
      ansible.builtin.set_fact:
        parsed_group: "{{ parsed_group | default([]) + [{_key: _value}] }}"
      loop: "{{ parsed_list | groupby('Name') }}"
      vars:
        _key: "{{ item.0 }}"
        _value: "{{ item.1 | map(attribute='Subnet') | list }}"

    - name: Debug parsed_group
      ansible.builtin.debug:
        var: parsed_group
Parse the content, e.g.
- set_fact:
    parsed_list: "{{ parsed_list|d([]) + [_item] }}"
  loop: "{{ lookup('file','hostgroups.txt').splitlines() }}"
  vars:
    _array: "{{ item.split() }}"
    _item: "{{ {'building': _array.1, 'ip': _array.0} }}"
gives
parsed_list:
  - building: Building-A
    ip: 10.0.0.0/16
  - building: Building-A
    ip: 10.1.0.0/16
  - building: Building-B
    ip: 10.2.0.0/16
  - building: Building-B
    ip: 10.3.0.0/16
Then, use the groupby filter and create the list
- set_fact:
    parsed: "{{ parsed|d([]) + [{_key: _val}] }}"
  loop: "{{ parsed_list|groupby('building') }}"
  vars:
    _key: "{{ item.0 }}"
    _val: "{{ item.1|map(attribute='ip')|list }}"
gives
parsed:
  - Building-A:
      - 10.0.0.0/16
      - 10.1.0.0/16
  - Building-B:
      - 10.2.0.0/16
      - 10.3.0.0/16
In some cases, a dictionary might be a better structure, e.g.
- set_fact:
    parsed_dict: "{{ parsed_dict|d({})|combine({_key: _val}) }}"
  loop: "{{ parsed_list|groupby('building') }}"
  vars:
    _key: "{{ item.0 }}"
    _val: "{{ item.1|map(attribute='ip')|list }}"
gives
parsed_dict:
  Building-A:
    - 10.0.0.0/16
    - 10.1.0.0/16
  Building-B:
    - 10.2.0.0/16
    - 10.3.0.0/16
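The dictionary is also easy to consume later, for example with the dict2items filter (a minimal sketch that just prints each building with its subnets):

- name: Show subnets per building
  ansible.builtin.debug:
    msg: "{{ item.key }}: {{ item.value | join(', ') }}"
  loop: "{{ parsed_dict | dict2items }}"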
In Ansible, the vmware_guest_info module gives us a list of the tags on a VM, but does not include any further information about those tags:
"tags": [
    "10.16.3",
    "dicky",
    "develop"
],
The vmware_tag_info module gives us a dict with info on those tags, including description and ID, but NOT the tag's name:
"10.16.3": {
    "tag_category_id": "urn:vmomi:InventoryServiceCategory:6eb9d643-8fa3-42a1-8b50-78a1c6e99867:GLOBAL",
    "tag_description": "10.16.3",
    "tag_id": "urn:vmomi:InventoryServiceTag:ca46ab80-be91-4c3a-8f9f-019d163dd954:GLOBAL",
    "tag_used_by": []
},
The vmware_category_info module gives us a list that includes the ID and name of each tag category:
"tag_category_info": [
    {
        "category_associable_types": [],
        "category_cardinality": "SINGLE",
        "category_description": "nodeVersion",
        "category_id": "urn:vmomi:InventoryServiceCategory:6eb9d643-8fa3-42a1-8b50-78a1c6e99867:GLOBAL",
        "category_name": "nodeVersion",
        "category_used_by": []
    },
]
So it seems I need to combine the output of three different lists to get the tag value, tag name and tag ID.
I really hope that someone has already done this. If not, can anyone shed some light on how to iterate over the output of vmware_tag_info and vmware_category_info, and find when tag_category_id matches category_id?
How about this?
---
- name: Example playbook
  hosts: localhost
  gather_facts: no
  vars:
    vcenter_hostname: change me
    vcenter_username: administrator@vsphere.local
    vcenter_password: change me
    dc1: change me
    vm_name: change me
  tasks:
    - name: Gather tags information
      vmware_tag_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
      register: tags_info_result

    - name: Gather tags category information
      vmware_category_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
      register: tags_category_info_result

    - name: Set tags_information variable
      set_fact:
        tags_information: >-
          {{ tags_information | default({})
             | combine({
                 item.key: tags_category_info_result.tag_category_info
                           | selectattr('category_id', '==', item.value.tag_category_id)
                           | list
                           | first
                           | combine(item.value)
                           | combine({'tag_name': item.key})
               })
          }}
      with_dict: "{{ tags_info_result.tag_facts }}"

    - name: Gather VM information
      vmware_guest_info:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: no
        datacenter: "{{ dc1 }}"
        name: "{{ vm_name }}"
        tags: true
      register: vm_info_result

    - name: Display VM tags information
      debug:
        msg:
          - "tags information about {{ vm_name }}"
          - "{{ tags_information[item] }}"
      loop: "{{ vm_info_result.instance.tags }}"
      when:
        - "'tags' in vm_info_result.instance"
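As a follow-up, the combined dict makes it easy to look up, say, the category name for each tag on the VM (a minimal sketch on top of the variables registered above):

- name: Show the category of each VM tag
  debug:
    msg: "tag {{ item }} belongs to category {{ tags_information[item].category_name }}"
  loop: "{{ vm_info_result.instance.tags }}"
  when: "'tags' in vm_info_result.instance"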
I use Ansible to create and run LXC containers in Proxmox.
Run container task:
- name: "DHCP IP"
  proxmox:
    ...
    hostname: "{{ item }}"
    ...
    pubkey: "{{ pubkey }}"
  with_items:
    - "{{ (servers_name_suggested | union(servers_name_list)) | unique }}"
  register: output_dhcp
  when: not static_ip

- set_fact:
    vmid: "{{ output_dhcp.results[0].msg | regex_search('[0-9][0-9][0-9]') }}"

- name: "Start container {{ vmid }}"
  proxmox:
    vmid: "{{ vmid }}"
    api_user: root@pam
    api_password: "{{ api_password }}"
    api_host: "{{ api_host }}"
    state: started
  when: start_lxc
This works if only one container is launched, i.e. one item in the "DHCP IP" task. If I set
two or more items, my task starts only the first container, because I am using
output_dhcp.results[0].msg
How can I take the information about all containers? For example, if I create three containers I get:
output_dhcp.results[1].msg
output_dhcp.results[2].msg
and pass them to
- name: "Start container {{ vmid }}"
  proxmox:
    vmid: "{{ vmid }}"
to start all my new containers.
output_dhcp.results is a list; if you extract only the first element with [0], you will only ever get the first container.
You need to transform the list into another list you can iterate over in the "Start container" task:
- set_fact:
    vmids: "{{ output_dhcp.results | map(attribute='msg') | map('regex_search', '[0-9][0-9][0-9]') | list }}"

- name: "Start container {{ item }}"
  proxmox:
    vmid: "{{ item }}"
    api_user: root@pam
    api_password: "{{ api_password }}"
    api_host: "{{ api_host }}"
    state: started
  with_items: "{{ vmids }}"
  when: start_lxc
To explain the transformation part:
output_dhcp.results | map(attribute='msg') => get the msg attribute of each item of the output_dhcp.results list (http://jinja.pocoo.org/docs/dev/templates/#map)
| map('regex_search', '[0-9][0-9][0-9]') => apply regex_search to each item of the list
| list => transform the generator into a list
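For instance, you can check the result with a quick debug; with three containers whose msg fields contain the IDs 101, 102 and 103, the output would look roughly like this (illustrative values only):

- debug:
    var: vmids

# vmids:
#   - "101"
#   - "102"
#   - "103"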
I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and later should attach an existing volume to the new instance:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination

- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good, everything works.
I would like to add a clean-up step, so that if there is an error of any type in the later modules, I won't be left with any garbage instances.
I understand that the idea here is to use a block, but when I tried working with blocks, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost,
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars

    - name: Create instance
      block:
        - debug:
            msg: "Launcing EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance

  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2

    - debug: msg="{{ new_ec2.instances[0].id }}"

    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped

    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns an ok status, so:
1. it does not notify a handler (that requires a changed status),
2. the rescue section is never triggered (that requires a failed status).
Thus it is expected that "nothing really happens".
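To illustrate the statuses involved, here is a minimal, self-contained sketch (not the fix itself): the failing task gives the block a failed status, which is what makes the rescue section run, whereas a plain ok-status debug never triggers it:

- name: Demo of block and rescue
  block:
    - name: A task that fails
      fail:
        msg: "boom"
  rescue:
    - debug:
        msg: "Rolling back"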
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2

    - debug:
        msg: "{{ new_ec2.instances[0].id }}"

    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"

  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
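If some clean-up should run regardless of success or failure, a block also supports an always section. A minimal sketch (not specific to EC2):

- name: Demo of block, rescue and always
  block:
    - debug:
        msg: "provisioning steps go here"
  rescue:
    - debug:
        msg: "runs only if a task in the block failed"
  always:
    - debug:
        msg: "runs whether the block succeeded or failed"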
So I'm working on some Ansible playbooks to manage EC2 instances, and here is one of them:
---
- name: Create linux EC2 node
  hosts: localhost
  connection: local
  gather_facts: False
  roles:
    - { role: ec2, ec2_volume_size: 500,
        ec2_ansible_tag: { ansible_type: linux-node },
        ec2_instance_type: "m4.large"
      }
    - { role: ec2_eip, ec2_tag: tag_ansible_type_linux_node }
The ec2 role is as follows:
- name: Create instance
  local_action:
    module: ec2
    region: "{{ ec2.region }}"
    key_name: "{{ ec2_key_name | default(ec2.credentials) }}"
    instance_tags: "{{ ec2_ansible_tag | combine( ec2.tags ) }}"
    image: "{{ ec2_image | default(ec2.image) }}"
    instance_type: "{{ ec2_instance_type | default('t2.small') }}"
    instance_profile_name: "{{ ec2.role | default('') }}"
    ebs_optimized: "{{ ec2.ebs | default(false) }}"
    group: "{{ ec2_groups | default(ec2.group_name) }}"
    vpc_subnet_id: "{{ ec2_subnet | default(ec2.subnet) }}"
    assign_public_ip: "{{ ec2.public_ip | default('yes') }}"
    private_ip: "{{ ec2.private_ip | default('') }}"
    monitoring: "{{ ec2_cloudwatch | default('no') }}"
    wait: yes
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: "{{ ec2_volume_size | default(100) }}"
  register: ec2inst

- name: register data
  set_fact:
    ec2_create_data: "{{ ec2inst }}"
The issue is that the instances go straight to the terminated state, which only started happening very recently.
The output is:
"msg": "wait for instances running timeout on (Timeout Time)"
What can lead to this happening?
Answer from the comments:
Worth a look: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html
You're totally right, it's "State transition reason message Client.VolumeLimitExceeded: Volume limit exceeded".
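To see that reason from Ansible rather than from the console, something like the following should work with ec2_instance_info (ec2_instance_facts on older releases); this is a sketch and assumes the module exposes the state_reason field the same way describe-instances does:

- name: Inspect why the instance was terminated
  ec2_instance_info:
    region: "{{ ec2.region }}"
    instance_ids:
      - "{{ ec2inst.instance_ids[0] }}"
  register: dead_instance

- debug:
    msg: "{{ dead_instance.instances[0].state_reason | default('no state reason returned') }}"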