Can an Ansible task be treated as a dictionary?

I have two tasks that differ by one (optional) value. Depending on where the playbook is running, the optional value is either present or not. At the moment I have two copies of the task, executed depending on a when clause, but this is error-prone: the task in question has a lot of other mandatory config, and the one optional entry gets lost in the mix.
- name: Launch configuration for prod
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lcprod
  tags:
    - asg
  when: env == "prod"
- name: Launch configuration for non-prod
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    spot_price: "{{ admin_bid_price }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lcdev
  tags:
    - asg
  when: env != "prod"
- name: Set lc for stg or prod
  set_fact: lc="{{ lcprod }}"
  when: env == "prod"

- name: Set lc for none stg or prod
  set_fact: lc="{{ lcdev }}"
  when: env != "prod"
Ideally, this example could be collapsed down into a single task (i.e. without the subsequent set_fact tasks), with the spot_price entry either present or absent depending on the env.
I've tried setting spot_price to null or "" without success.

To me it seems like you want:
spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
See: Ansible Filters - Omitting Parameters and Other Useful Filters
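Putting that together with the playbook above, the two launch-configuration tasks and the two set_fact tasks could collapse into a single task along these lines (a sketch, not tested against your environment; it registers straight into lc so the set_fact steps become unnecessary, and the task name is mine):

- name: Launch configuration
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    # Dropped entirely on prod, passed through everywhere else
    spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lc
  tags:
    - asg

Since spot_price is a top-level module parameter here, omit removes it cleanly from the module arguments.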

Related

Apply when conditional to nested lists with Ansible, not just to top-level one

So I have this killer problem with Ansible + AWS EC2.
If the disk type is 'io1', 'iops' should be specified; otherwise it should not be specified.
If I want to make the disk type configurable, I have to apply when, but it does not work:
- name: Provision AWS Instances
  ec2:
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_ami_id }}"
    ...
    volumes:
      - device_name: "{{ ec2_root_disk_name }}"
        volume_type: "{{ ec2_root_disk_type }}"
        volume_size: "{{ ec2_root_disk_size }}"
        delete_on_termination: "{{ ec2_root_disk_delete_on_termination }}"
        iops: "{{ ec2_root_disk_iops }}"
        when: ec2_root_disk_type == 'io1'
      - device_name: "{{ ec2_root_disk_name }}"
        volume_type: "{{ ec2_root_disk_type }}"
        volume_size: "{{ ec2_root_disk_size }}"
        delete_on_termination: "{{ ec2_root_disk_delete_on_termination }}"
        when: ec2_root_disk_type != 'io1'
But this nested 'when' condition is not processed, and the drive is added twice:
"volumes": [
    {
        "delete_on_termination": "true",
        "device_name": "/dev/sda1",
        "iops": "100",
        "volume_size": "30",
        "volume_type": "gp2",
        "when": "ec2_root_disk_type == 'io1'"
    },
    {
        "delete_on_termination": "true",
        "device_name": "/dev/sda1",
        "volume_size": "30",
        "volume_type": "gp2",
        "when": "ec2_root_disk_type != 'io1'"
    },
]
I could add 'when' to the whole task, but I may have up to 5 drives, so that would mean copying the task over 32 times (once per combination).
Is it possible to make Ansible process 'when' on nested lists?
Is it possible to assemble this list in a variable and refer to it in this task? The Ansible documentation is silent on non-string, structured variables. Maybe I can apply some collection filtering?
What's the recommended course of action here? I imagine everyone bumps into this exact problem when they try creating volumes of configurable type.
Specify volumes in the inventory, not in the play.
For instance, the group_vars/with_iops.yml file:
volumes:
  - device_name: "{{ ec2_root_disk_name }}"
    volume_type: "{{ ec2_root_disk_type }}"
    volume_size: "{{ ec2_root_disk_size }}"
    delete_on_termination: "{{ ec2_root_disk_delete_on_termination }}"
    iops: "{{ ec2_root_disk_iops }}"
The group_vars/without_iops.yml file:
volumes:
  - device_name: "{{ ec2_root_disk_name }}"
    volume_type: "{{ ec2_root_disk_type }}"
    volume_size: "{{ ec2_root_disk_size }}"
    delete_on_termination: "{{ ec2_root_disk_delete_on_termination }}"
The play:
- name: Provision AWS Instances
  ec2:
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_ami_id }}"
    ...
    volumes: "{{ volumes }}"
You can use an inline condition such as:
- device_name: "{{ ec2_root_disk_name }}"
  volume_type: "{{ ec2_root_disk_type }}"
  volume_size: "{{ ec2_root_disk_size }}"
  delete_on_termination: "{{ ec2_root_disk_delete_on_termination }}"
  iops: "{{ ec2_root_disk_iops if ec2_root_disk_type == 'io1' }}"
I have actually ended up adding an empty custom Python module to library/ and a corresponding action plugin to action_plugins/ of my playbook. In this plugin I process the input arguments, build the volumes list I need, and call ec2 with the adjusted arguments.
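For the "assemble this list in a variable" part of the question, a plugin is not strictly required. Here is a minimal sketch of a pure-playbook alternative using set_fact and the combine filter (variable names follow the question, the task names are mine, and the ec2 task is abbreviated just as in the question):

- name: Assemble the root volume definition
  set_fact:
    ec2_volumes:
      - device_name: "{{ ec2_root_disk_name }}"
        volume_type: "{{ ec2_root_disk_type }}"
        volume_size: "{{ ec2_root_disk_size }}"
        delete_on_termination: "{{ ec2_root_disk_delete_on_termination }}"

- name: Add iops to the root volume only for io1
  set_fact:
    ec2_volumes: "{{ [ ec2_volumes[0] | combine({'iops': ec2_root_disk_iops}) ] }}"
  when: ec2_root_disk_type == 'io1'

- name: Provision AWS Instances
  ec2:
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ec2_ami_id }}"
    # ... other ec2 parameters as in the question
    volumes: "{{ ec2_volumes }}"

The same pattern extends to several drives: build each entry unconditionally, then combine in the optional keys under a when.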

Ansible - Working with block module

I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and later should attach an existing volume to the new instance:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination

- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good; everything works.
I would like to add a clean-up step, so that if any later module errors out, I won't be left with garbage instances.
I've understood that the idea here is to use the block module; however, when I tried working with block, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost,
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars
    - name: Create instance
      block:
        - debug:
            msg: "Launcing EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance
  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug: msg="{{ new_ec2.instances[0].id }}"
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns an ok status, so:
1. it does not notify a handler (that requires a changed status),
2. the rescue section is never triggered (that requires a failed status).
Thus it is expected that "nothing really happens".
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug:
        msg: "{{ new_ec2.instances[0].id }}"
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
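One extra safeguard worth considering (my addition, not part of the answer above): if the launch task itself is the one that fails, new_ec2 holds the failed result and has no instances key, so the rollback task inside rescue would fail a second time. A guard on the rescue task avoids that:

  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
      # Only roll back when the launch actually produced instances
      when: new_ec2.instances is defined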

How to set delete on termination for boot volume in ec2 volume using ansible

It appears to be easy to set delete on termination for new volumes attached to an ec2 instance, but how do I set that on the boot volume?
Are you talking about setting this option on the root/boot volume during instance creation? If so, here is example code:
- name: Create EC2 Instance(s)
  ec2:
    region: "{{ vpc_region }}"
    group: "{{ ec2_security_group_name }}"
    keypair: "{{ ec2_key_name }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ami_find_result.results[0].ami_id }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: True
    wait_timeout: 600
    user_data: |
      #!/bin/sh
      sudo apt-get install nginx -y
    instance_tags:
      Name: "{{ vpc_name }}-{{ SERVER_ROLE }}"
      Environment: "{{ ENVIRONMENT }}"
      Server_Role: "{{ SERVER_ROLE }}"
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: "{{ ec2_volume_size }}"
        delete_on_termination: yes
For details, please check this link
Hope that might help you

How to get the exit status of each task in Ansible

I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
Now I want to check the exit status of each task, whether it succeeded or failed.
If any one of the tasks fails, the following tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
For example:
  register: ec2

- fail:
  when: "ec2.rc == 1"
Here rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
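A minimal sketch of that register-then-check pattern, reusing names from the question (the filter-style succeeded test is the one shown on the linked page; newer Ansible spells it ec2 is succeeded):

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "[{{ release_name }}]"
    state: present
  # Run only if the registered provisioning result reports success
  when: ec2 | succeeded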
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling can help you?
Kumar, if you want to check each task's output for success or failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2

- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput

- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
  when: fileoutput | changed
In your code, register a variable in each and every task. If a task's result has changed set to true, the following task will execute; otherwise it will be skipped.

Ansible - Use string in a variable lookup

I'm trying to make my playbooks aware of their environment, that being production or stage.
My playbook first pulls the ec2_tag for its environment successfully and sets it in the variable 'env'. In the ec2 provision task, I would like to use the 'env' variable in the VPC and subnet lookups, so it grabs the correct one based on its environment. Below I tried adding the 'env' variable into the lookups, but I'm getting the full string back instead of it being pulled from my variable file.
Do I need to add some logic checks within the {{ }}?
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ env + '_subnet_public1' }}"
    group_id: "{{ env + '_app_group' }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
If you want to do variable interpolation then you'll need to use the hostvars[inventory_hostname]['variable'] syntax.
The Ansible FAQ covers this briefly, but for your use case you should be able to do something like:
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ hostvars[inventory_hostname][env + '_subnet_public1'] }}"
    group_id: "{{ hostvars[inventory_hostname][env + '_app_group'] }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
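A possible alternative (my note, not part of the answer above; it needs Ansible 2.5 or later): the vars lookup builds the variable name from a string directly, which avoids going through hostvars:

    vpc_subnet_id: "{{ lookup('vars', env + '_subnet_public1') }}"
    group_id: "{{ lookup('vars', env + '_app_group') }}"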
