I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and then attaches an existing volume to the new instance:
- name: launch instance
ec2:
key_name: "{{ aws_vars.key_name }}"
group: "{{ aws_vars.security_group }}"
instance_type: "{{ aws_vars.instance_type }}"
image: "{{ aws_vars.image }}"
region: "{{ aws_vars.region }}"
wait: yes
count: 1
instance_tags: "{{ tags }}"
monitoring: yes
vpc_subnet_id: "{{ subnetid }}"
assign_public_ip: yes
register: destination
- name: Attach volumes
ec2_vol:
device_name: xvdf
instance: "{{ destination.instances[0].instance_id }}"
region: "{{ aws_vars.region }}"
tags: "{{ ec2_tags }}"
id: "{{ volume_id }}"
delete_on_termination: yes
with_items: "{{ destination }}"
So far, so good, everything works.
I would like to add a clean-up method, so that if any of the later modules fails, I won't be left with garbage instances.
I understand that the idea here is to use a block; however, when I tried working with a block, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost
connection: local
gather_facts: no
tasks:
- name: Vars
include_vars:
dir: files
name: aws_vars
- name: Create instance
block:
- debug:
msg: "Launcing EC2"
notify:
- launch_instance
- Cloudwatch
rescue:
- debug:
msg: "Rolling back"
notify: stop_instance
handlers:
- name: launch_instance
ec2:
key_name: "{{ aws_vars.key_name }}"
group: "{{ aws_vars.security_group }}"
instance_type: "{{ aws_vars.instance_type }}"
image: "{{ aws_vars.image }}"
region: "{{ region }}"
wait: yes
count: 1
monitoring: yes
assign_public_ip: yes
register: new_ec2
- debug: msg="{{ new_ec2.instances[0].id }}"
- name: stop_instance
ec2:
instance_type: "{{ aws_vars.instance_type }}"
instance_ids: "{{ new_ec2.instances[0].id }}"
state: stopped
- name: Cloudwatch
ec2_metric_alarm:
state: present
region: "{{ aws_vars.region }}"
name: "{{ new_ec2.id }}-High-CPU"
metric: "CPUUtilization"
namespace: "AWS/EC2"
statistic: Average
comparison: ">="
threshold: "90.0"
period: 300
evaluation_periods: 3
unit: "Percent"
description: "Instance CPU is above 90%"
dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
alarm_actions: "{{ aws_vars.sns_arn }}"
ok_actions: "{{ aws_vars.sns_arn }}"
with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns an ok status, so:
- it does not notify a handler (that requires a changed status),
- the rescue section is never triggered (that requires a failed status).
Thus it is expected that "nothing really happens".
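As a minimal illustration (separate from your playbook), a task that actually fails is what makes the rescue section run:

- name: Demonstrate rescue on failure
  block:
    - name: This task fails, so rescue runs
      command: /bin/false
  rescue:
    - debug:
        msg: "Rolling back"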
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
block:
- name: launch_instance
ec2:
key_name: "{{ aws_vars.key_name }}"
group: "{{ aws_vars.security_group }}"
instance_type: "{{ aws_vars.instance_type }}"
image: "{{ aws_vars.image }}"
region: "{{ region }}"
wait: yes
count: 1
monitoring: yes
assign_public_ip: yes
register: new_ec2
- debug:
msg: "{{ new_ec2.instances[0].id }}"
- name: Cloudwatch
ec2_metric_alarm:
state: present
region: "{{ aws_vars.region }}"
name: "{{ new_ec2.id }}-High-CPU"
metric: "CPUUtilization"
namespace: "AWS/EC2"
statistic: Average
comparison: ">="
threshold: "90.0"
period: 300
evaluation_periods: 3
unit: "Percent"
description: "Instance CPU is above 90%"
dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
alarm_actions: "{{ aws_vars.sns_arn }}"
ok_actions: "{{ aws_vars.sns_arn }}"
with_items: "{{ new_ec2 }}"
rescue:
- name: stop_instance
ec2:
instance_type: "{{ aws_vars.instance_type }}"
instance_ids: "{{ new_ec2.instances[0].id }}"
state: stopped
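Note: state: stopped only stops the instance, so a failed run still leaves a (stopped) instance behind. If the rollback should remove the garbage instance entirely, the same ec2 module can terminate it instead:

- name: terminate_instance
  ec2:
    instance_ids: "{{ new_ec2.instances[0].id }}"
    region: "{{ aws_vars.region }}"
    state: absent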
The following Ansible code creates 1 EC2 instance with 1 tag
- ec2:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_access_key }}"
ec2_region: "{{ aws_region }}"
assign_public_ip: yes
instance_type: "{{ instance_type }}"
image: "{{ aws_ami }}"
keypair: ansible_key
vpc_subnet_id: "{{ aws_subnet_id }}"
group_id: "{{ aws_security_group }}"
exact_count: 1
count_tag:
name: first_group
instance_tags:
name: first_group
wait: yes
register: ec2
- debug:
var: ec2
What is the best way to rework this code to create 2 EC2 instances, each one with a different tag?
Mind you, this is OLD STUFF, so it may not work any more. But I defined the inventory first, in group_vars/all.yml:
env_id: EE_TEST3_SLS
env_type: EE
zones:
- zone: presentation
servers:
- type: rp
count: 1
instance_type: m4.xlarge
- type: L7
count: 1
instance_type: m4.xlarge
- zone: application
servers:
- type: app
count: 2
instance_type: m4.xlarge
- type: sre
count: 0
instance_type: m4.xlarge
- type: httpd
count: 1
instance_type: m4.xlarge
- type: terracotta
count: 4
mirror_groups: 2
instance_type: m4.xlarge
- type: L7
count: 1
instance_type: m4.xlarge
- zone: data
servers:
- type: batch
count: 2
instance_type: m4.xlarge
- type: terracotta_tmc
count: 1
instance_type: m4.xlarge
- type: terracotta
count: 2
mirror_groups: 1
instance_type: m4.xlarge
- type: oracle
count: 1
instance_type: m4.4xlarge
- type: marklogic
count: 1
instance_type: m4.4xlarge
- type: alfresco
count: 1
instance_type: m4.4xlarge
Then, the playbook:
---
- hosts: localhost
connection: local
gather_facts: false
tasks:
- name: Provision a set of instances
ec2:
instance_type: "{{ item.1.instance_type }}"
image: "{{ image }}"
region: "{{ region }}"
vpc_subnet_id: "{{ subnets[item.0.zone] }}"
tenancy: "{{ tenancy }}"
group_id: "{{ group_id }}"
key_name: "{{ key_name }}"
wait: true
instance_tags:
EnvID: "{{ env_id }}"
Type: "{{ item.1.type }}"
Zone: "{{ item.0.zone }}"
count_tag:
EnvID: "{{ env_id }}"
Type: "{{ item.1.type }}"
Zone: "{{ item.0.zone }}"
exact_count: "{{ item.1.count }}"
with_subelements:
- "{{ zones }}"
- servers
register: ec2_results
The idea is that you set all the tags in the inventory, and just reference them in the task.
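If it helps to see what with_subelements yields, a quick illustrative debug task: each item is a pair of the outer zone dict (item.0) and one entry of its servers list (item.1).

- debug:
    msg: "zone={{ item.0.zone }} type={{ item.1.type }} count={{ item.1.count }}"
  with_subelements:
    - "{{ zones }}"
    - servers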
Provided you simply want to create the same instances, only replacing the value first_group with second_group in the module parameters, the following (totally untested) example should put you on track.
If you need to go further, as already advised in the comments, have a look at the Ansible loop documentation.
- ec2:
aws_access_key: "{{ aws_access_key }}"
aws_secret_key: "{{ aws_secret_access_key }}"
ec2_region: "{{ aws_region }}"
assign_public_ip: yes
instance_type: "{{ instance_type }}"
image: "{{ aws_ami }}"
keypair: ansible_key
vpc_subnet_id: "{{ aws_subnet_id }}"
group_id: "{{ aws_security_group }}"
exact_count: 1
count_tag:
name: "{{ item }}"
instance_tags:
name: "{{ item }}"
wait: yes
register: ec2
loop:
- first_group
- second_group
- debug:
var: ec2.results
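Note that when loop is combined with register, the registered variable stores the output of every iteration in a results list, which is why the debug task above inspects ec2.results rather than ec2 itself.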
I am building an AWS EC2 instance using Ansible and want to set three environment variables to be used by the roles. The playbook is launched from AWX. I get the STS assume-role values in the "Assume Credentials" task.
---
- name: Create Instance
hosts: localhost
gather_facts: false
connection: local
tasks:
- import_tasks: usshared.yml
when: (buildenv == "us-shared")
- import_tasks: ossettings.yml
- name: Assume Credentials
sts_assume_role:
region: "{{ target_region }}"
role_arn: "{{ awsarnrole }}"
role_session_name: "AWXBuildServer"
register: assumed_role
# - name: Assume Role Data
# debug:
# var: assumed_role
- name: Find AMI Target
ec2_ami_info:
filters:
name: "cmpc*{{ osselection }}*base*"
owners: self
region: "{{ target_region }}"
aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
security_token: "{{ assumed_role.sts_creds.session_token }}"
register: found_base_ami
- import_tasks: osbuildlinux.yml
when: (osselection == "centos7") or (osselection == "rhel7")
- import_tasks: osbuildwin.yml
when: (osselection == "win2019")
- name: Configure New Linux Instance
hosts: new_launch_linux
gather_facts: true
roles:
- systemupdates
- generalostasks
- networkconfig
- appgroup
environment:
AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
- name: Configure New Windows Instance
hosts: new_launch_windows
gather_facts: true
roles:
- systemupdates
- generalostasks
- networkconfig
- winadjoin
- appgroup
This is the assumed_role register contents:
"assumed_role": {
"changed": true,
"sts_creds": {
"access_key": "HIDDEN",
"secret_key": "HIDDEN",
"session_token": "HIDDEN",
"expiration": "2022-06-08T16:58:38+00:00"
},
"sts_user": {
"assumed_role_id": "HIDDEN:AWXBuildServer",
"arn": "arn:aws:sts::0123456789:assumed-role/sre-ec2-role-assumed/AWXBuildServer"
}
}
Now I want to use some of the register values as environment variables for roles like so:
roles:
- systemupdates
- generalostasks
- networkconfig
- appgroup
environment:
AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
However, I am getting this error:
The field 'environment' has an invalid value, which includes an undefined variable. The error was: 'assumed_role' is undefined. 'assumed_role' is undefined
How do I correctly pass the register values to the environment?
OK, you have more than one play in the same playbook, with different hosts, so I suggest using add_host after you have registered the result:
- name: Assume Credentials
sts_assume_role:
region: "{{ target_region }}"
role_arn: "{{ awsarnrole }}"
role_session_name: "AWXBuildServer"
register: assumed_role
- name: add variables to dummy host
add_host:
name: "variable_holder"
shared_variable: "{{ assumed_role }}"
Then, in the second play:
- name: Configure New Linux Instance
hosts: new_launch_linux
gather_facts: true
vars:
assumed_role: "{{ hostvars['variable_holder']['shared_variable'] }}"
roles:
- systemupdates
- generalostasks
- networkconfig
- appgroup
environment:
AWS_ACCESS_KEY: "{{ assumed_role.sts_creds.access_key }}"
AWS_SECRET_ACCESS_KEY: "{{ assumed_role.sts_creds.secret_key }}"
AWS_SECURITY_TOKEN: "{{ assumed_role.sts_creds.session_token }}"
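An alternative worth knowing: a registered variable stays attached to the host it was registered on for the rest of the run, and the first play runs on localhost, so the second play can also read it straight out of hostvars without the dummy host:

- name: Configure New Linux Instance
  hosts: new_launch_linux
  gather_facts: true
  vars:
    assumed_role: "{{ hostvars['localhost']['assumed_role'] }}"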
I'm currently building Ansible image playbook templates across AWS and vSphere, and I'd like to be able to define multiple additional disks, but only when they're defined via the variables.
Playbook:
---
- hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Launch Windows 2016 VM Instance
vmware_guest:
validate_certs: no
datacenter: "{{ vm_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_servername }}"
state: poweredon
template: "{{ vm_template }}"
cluster: "{{ vm_cluster }}"
disk:
- size_gb: "{{ vm_disk_size0 | default(80) }}"
type: "{{ vm_disk_type0 | default(thin) }}"
datastore: "{{ vm_disk_datastore0 }}"
- size_gb: "{{ vm_disk_size1 }}"
type: "{{ vm_disk_type1 }}"
datastore: "{{ vm_disk_datastore1 }}"
- size_gb: "{{ vm_disk_size2 }}"
type: "{{ vm_disk_type2 }}"
datastore: "{{ vm_disk_datastore2 }}"
- size_gb: "{{ vm_disk_size3 }}"
type: "{{ vm_disk_type3 }}"
datastore: "{{ vm_disk_datastore3 }}"
hardware:
memory_mb: "{{ vm_memory_mb | default(8192) }}"
num_cpus: "{{ vm_num_cpus | default(4) }}"
networks:
- name: "{{ vm_network }}"
start_connected: yes
vlan: "{{ vm_network }}"
device_type: vmxnet3
type: dhcp
domain: "{{ vm_domain }}"
customization:
hostname: "{{ vm_servername }}"
orgname: Redacted
password: "{{ winlocal_admin_pass }}"
timezone: 255
wait_for_ip_address: yes
register: vm
Variables:
vm_disk_datastore0: C6200_T2_FCP_3Days
vm_disk_size0: 80
vm_disk_type0: thin
vm_disk_datastore1: "C6200_T2_FCP_3Days"
vm_disk_size1: "50"
vm_disk_type1: "thin"
vm_disk_datastore2: "C6200_T2_FCP_3Days"
vm_disk_size2: "20"
vm_disk_type2: "thin"
vm_disk_datastore3: ""
vm_disk_size3: ""
vm_disk_type3: ""
Error:
{
"_ansible_parsed": false,
"exception": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2396, in <module>\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2385, in main\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2008, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1690, in configure_disks\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1608, in get_configured_disk_size\nValueError: invalid literal for int() with base 10: ''\n",
"_ansible_no_log": false,
"module_stderr": "Traceback (most recent call last):\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 113, in <module>\n _ansiballz_main()\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 105, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1568948672.23-135785453591577/AnsiballZ_vmware_guest.py\", line 48, in invoke_module\n imp.load_module('__main__', mod, module, MOD_DESC)\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2396, in <module>\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2385, in main\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 2008, in deploy_vm\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1690, in configure_disks\n File \"/tmp/ansible_vmware_guest_payload_6vidiw/__main__.py\", line 1608, in get_configured_disk_size\nValueError: invalid literal for int() with base 10: ''\n",
"changed": false,
"module_stdout": "",
"rc": 1,
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error"
}
The idea being: if a disk's variables aren't defined when the job is launched, that disk entry should be skipped by the vmware_guest module.
Is this the best method? Is anyone able to suggest a successful way forward?
Update: Probably the best method would be to build the instance and then use vmware_guest_disk to add the additional disks, using the method suggested below. That module is available from v2.8.
Our version of Tower is 2.7.9, though, so I'm going to use a more verbose method of multiple vmware_guest calls gated by a num_disks variable:
Variables
num_disks: 0
vm_cluster: C6200_NPE_PC_ST
vm_datacenter: DC2
vm_disk_datastore0: C6200_T2_FCP_3Days
vm_disk_datastore1: ''
vm_disk_datastore2: ''
vm_disk_datastore3: ''
vm_disk_datastore4: ''
vm_disk_size0: 80
vm_disk_type0: thin
vm_disk_type1: ''
vm_disk_type2: ''
vm_disk_type3: ''
vm_disk_type4: ''
vm_domain: corp.local
vm_folder: /DC2/vm/ap-dev
vm_hostname: xyzvmserver.corp.local
vm_memory_mb: 16000
vm_network: C6200_10.110.64.0_24_VL1750
vm_num_cpus: 4
vm_servername: server05
vm_template: Windows2016_x64_AN_ESX_v1.1
Playbook:
---
- hosts: localhost
connection: local
gather_facts: False
tasks:
- name: Launch Windows 2016 VM Instance - No additional Disk
vmware_guest:
validate_certs: no
datacenter: "{{ vm_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_servername }}"
state: poweredon
template: "{{ vm_template }}"
cluster: "{{ vm_cluster }}"
disk:
- size_gb: "{{ vm_disk_size0 }}"
type: "{{ vm_disk_type0 }}"
datastore: "{{ vm_disk_datastore0 }}"
hardware:
memory_mb: "{{ vm_memory_mb | default(8192) }}"
num_cpus: "{{ vm_num_cpus }}"
networks:
- name: "{{ vm_network }}"
start_connected: yes
vlan: "{{ vm_network }}"
device_type: vmxnet3
type: dhcp
domain: "{{ vm_domain }}"
customization:
hostname: "{{ vm_servername }}"
orgname: redacted
password: "{{ winlocal_admin_pass }}"
timezone: 255
wait_for_ip_address: yes
register: vm
when: num_disks == 0
- name: Launch Windows 2016 VM Instance 1 Additional Disk
vmware_guest:
validate_certs: no
datacenter: "{{ vm_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_servername }}"
state: poweredon
template: "{{ vm_template }}"
cluster: "{{ vm_cluster }}"
disk:
- size_gb: "{{ vm_disk_size0 }}"
type: "{{ vm_disk_type0 }}"
datastore: "{{ vm_disk_datastore0 }}"
- size_gb: "{{ vm_disk_size1 }}"
type: "{{ vm_disk_type1 }}"
datastore: "{{ vm_disk_datastore1 }}"
hardware:
memory_mb: "{{ vm_memory_mb | default(8192) }}"
num_cpus: "{{ vm_num_cpus }}"
networks:
- name: "{{ vm_network }}"
start_connected: yes
vlan: "{{ vm_network }}"
device_type: vmxnet3
type: dhcp
domain: "{{ vm_domain }}"
customization:
hostname: "{{ vm_servername }}"
orgname: redacted
password: "{{ winlocal_admin_pass }}"
timezone: 255
wait_for_ip_address: yes
register: vm
when: num_disks == 1
- name: Launch Windows 2016 VM Instance 2 Additional Disks
vmware_guest:
validate_certs: no
datacenter: "{{ vm_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_servername }}"
state: poweredon
template: "{{ vm_template }}"
cluster: "{{ vm_cluster }}"
disk:
- size_gb: "{{ vm_disk_size0 }}"
type: "{{ vm_disk_type0 }}"
datastore: "{{ vm_disk_datastore0 }}"
- size_gb: "{{ vm_disk_size1 }}"
type: "{{ vm_disk_type1 }}"
datastore: "{{ vm_disk_datastore1 }}"
- size_gb: "{{ vm_disk_size2 }}"
type: "{{ vm_disk_type2 }}"
datastore: "{{ vm_disk_datastore2 }}"
hardware:
memory_mb: "{{ vm_memory_mb | default(8192) }}"
num_cpus: "{{ vm_num_cpus }}"
networks:
- name: "{{ vm_network }}"
start_connected: yes
vlan: "{{ vm_network }}"
device_type: vmxnet3
type: dhcp
domain: "{{ vm_domain }}"
customization:
hostname: "{{ vm_servername }}"
orgname: redacted
password: "{{ winlocal_admin_pass }}"
timezone: 255
wait_for_ip_address: yes
register: vm
when: num_disks == 2
etc
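As an aside, one way to avoid this duplication entirely (an untested sketch; it assumes Ansible 2.5+ for the vars lookup and the numbered variable names used above) is to build the disk list in a loop and make a single vmware_guest call:

- name: Build the disk list from the numbered variables
  set_fact:
    vm_disks: >-
      {{ vm_disks | default([]) + [{
           'size_gb': lookup('vars', 'vm_disk_size' ~ item),
           'type': lookup('vars', 'vm_disk_type' ~ item),
           'datastore': lookup('vars', 'vm_disk_datastore' ~ item)}] }}
  loop: "{{ range(0, num_disks + 1) | list }}"   # disk0 plus num_disks extras

- name: Launch Windows 2016 VM Instance
  vmware_guest:
    # ...same parameters as the tasks above...
    disk: "{{ vm_disks }}"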
It looks like you should build your VM with one disk that you know for sure will always be there when you run the play, which you could make the first entry in this list:
vms:
  - vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "80"
    vm_disk_type: "thin"
  - vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "50"
    vm_disk_type: "thin"
  - vm_disk_datastore: "C6200_T2_FCP_3Days"
    vm_disk_size: "20"
    vm_disk_type: "thin"
  - vm_disk_datastore: ""
    vm_disk_size: ""
    vm_disk_type: ""
Since I don't know of a way to loop through sub-module arguments to make them present or not, build your VM with the one disk you know for sure will be there; as mentioned above, for consistency that should be the first one in your list. Otherwise you will need to rework the logic to drill into the list for a specific value you can key on, using Jinja filters like selectattr and map.
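For example, a minimal (untested) sketch of that selectattr idea, picking the first entry with a non-empty size instead of trusting list order:

- name: Pick the first fully-defined disk entry
  set_fact:
    boot_disk: "{{ vms | selectattr('vm_disk_size') | list | first }}"

The launch task would then reference boot_disk.vm_disk_size and friends. With list order reliable, though, the task is straightforward: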
- name: Launch Windows 2016 VM Instance
vmware_guest:
validate_certs: no
datacenter: "{{ vm_datacenter }}"
folder: "{{ vm_folder }}"
name: "{{ vm_servername }}"
state: poweredon
template: "{{ vm_template }}"
cluster: "{{ vm_cluster }}"
disk:
- size_gb: "{{ vms.0.vm_disk_size }}"
type: "{{ vms.0.vm_disk_type }}"
datastore: "{{ vms.0.vm_disk_datastore }}"
hardware:
memory_mb: "{{ vm_memory_mb | default(8192) }}"
num_cpus: "{{ vm_num_cpus | default(4) }}"
networks:
- name: "{{ vm_network }}"
start_connected: yes
vlan: "{{ vm_network }}"
device_type: vmxnet3
type: dhcp
domain: "{{ vm_domain }}"
customization:
hostname: "{{ vm_servername }}"
orgname: Redacted
password: "{{ winlocal_admin_pass }}"
timezone: 255
wait_for_ip_address: yes
register: vm
Once it's built, use this different module to add the additional disks:
- name: Add disks to virtual machine
vmware_guest_disk:
hostname: "{{ vm_servername }}"
datacenter: "{{ vm_datacenter }}"
disk:
- size_gb: "{{ item.vm_disk_size }}"
type: "{{ item.vm_disk_type }}"
datastore: "{{ item.vm_disk_datastore }}"
state: present
loop: "{{ vms }}"
loop_control:
label: "Disk {{ my_idx}} - {{ item.vm_disk_datastore }}"
index_var: my_idx
when:
- item.vm_disk_datastore != ""
- item.vm_disk_size != ""
- item.vm_disk_type != ""
- my_idx > 0
That will loop through the list of dictionaries, pulling the values by their keys, but only acting when they actually have a value (assuming leaving blanks in the list is something you plan on doing; if not, the when isn't even needed). It also lets you scale up just by adding more items to the list, without expanding the task. If you can't rely on list order and have to key on a variable in the first task, make sure you also change the second task so it omits the disk already added by the first.
Also, the documentation on this module states that existing disks addressed with it will be grown to the given size and can't be reduced. If you want to avoid that, put in some data checks comparing each configured disk size to what is in your variable, so you can skip the task when it would shrink a disk (or when you just don't want an existing VM to be altered).
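A rough sketch of such a check (untested; the module was called vmware_guest_disk_facts before Ansible 2.9, and the exact return shape may differ by version) would be to read the current layout first:

- name: Read current disk layout
  vmware_guest_disk_info:
    hostname: "{{ vcenter_hostname }}"   # assumed vCenter connection variables
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ vm_datacenter }}"
    name: "{{ vm_servername }}"
  register: disk_info

and then extend the when list on the vmware_guest_disk task with a guard along the lines of (return keys assumed) (item.vm_disk_size | int) * 1024 * 1024 >= (disk_info.guest_disk_info[my_idx | string].capacity_in_kb | default(0) | int), so a disk is never asked to shrink.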
I have two tasks that differ by one (optional) value. Depending on where the playbook is running, the optional value is either present or not. At the moment I have two copies of the task, executing depending on a when clause, but this is error prone, as the task in question has a lot of other mandatory config and the one optional entry is lost in the mix.
- name: Launch configuration for prod
ec2_lc:
region: "{{ aws_region }}"
name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
image_id: "{{ ami_id }}"
key_name: "{{ aws_key_pair_name }}"
security_groups: "{{ sec_group_instance.group_id }}"
instance_type: "{{ instance_type }}"
instance_profile_name: "{{ env }}-{{ type }}"
volumes:
- device_name: /dev/xvda
volume_size: 20
device_type: gp2
delete_on_termination: true
register: lcprod
tags:
- asg
when:
env == "prod"
- name: Launch configuration for non-prod
ec2_lc:
region: "{{ aws_region }}"
name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
image_id: "{{ ami_id }}"
key_name: "{{ aws_key_pair_name }}"
security_groups: "{{ sec_group_instance.group_id }}"
instance_type: "{{ instance_type }}"
spot_price: "{{ admin_bid_price }}"
instance_profile_name: "{{ env }}-{{ type }}"
volumes:
- device_name: /dev/xvda
volume_size: 20
device_type: gp2
delete_on_termination: true
register: lcdev
tags:
- asg
when:
env != "prod"
- name: Set lc for stg or prod
set_fact: lc="{{ lcprod }}"
when:
env == "prod"
- name: Set lc for none stg or prod
set_fact: lc="{{ lcdev }}"
when:
env != "prod"
Ideally, this example could be collapsed down into a single task (i.e. without the subsequent set_fact task) and the spot_price entry is either present or not depending on the env.
I've tried setting spot_price to null or "" without success.
To me it seems like you want:
spot_price: "{{ (env == "prod") | ternary(omit, admin_bid_price) }}"
See: Ansible Filters - Omitting Parameters and Other Useful Filters
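That collapses the two launch tasks into one (an untested sketch using only the parameters from your own tasks), and the two set_fact tasks disappear because you can register straight into lc:

- name: Launch configuration
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro | replace(':', '-') }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lc
  tags:
    - asg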
I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
local_action:
module: ec2
region: "{{ vpc_region }}"
key_name: "{{ ec2_keypair }}"
instance_type: "{{ instance_type }}"
image: "{{ ec2_image}}"
zone: "{{ public_az }}"
volumes:
- device_name: "{{ device }}"
volume_type: "{{ instance_volumetype }}"
volume_size: "{{ volume }}"
delete_on_termination: "{{ state }}"
instance_tags:
Name: "{{ instance_name }}_{{ release_name }}_APACHE"
environment: "{{ env_type }}"
vpc_subnet_id: "{{ public_id }}"
assign_public_ip: "{{ public_ip_assign }}"
group_id: "{{ sg_apache }},{{ sg_internal }}"
wait: "{{ wait_type }}"
register: ec2
- name: adding group to inventory file
lineinfile:
dest: "/etc/ansible/hosts"
regexp: "^\\[{{ release_name }}\\]"
line: "[{{ release_name }}]"
state: present
- name: adding apache ip to hosts
lineinfile:
dest: "/etc/ansible/hosts"
line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}
with_items: ec2.instances
Now I want to check the exit status of each task, whether it is a success or a failure.
If any one of the tasks fails, the other tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
Ex.
register: ec2
- fail:
  when: ec2.rc == 1
Here rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
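A minimal illustration of that pattern (names are just for the example):

- lineinfile:
    dest: /etc/ansible/hosts
    line: "[{{ release_name }}]"
  register: group_added

- debug:
    msg: "the previous task changed the file"
  when: group_added is changed   # on very old Ansible: group_added | changed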
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling can help you?
Kumar, if you want to check each task's output for success or failure, do this:
---
- name: Instance provisioning
local_action:
module: ec2
region: "{{ vpc_region }}"
key_name: "{{ ec2_keypair }}"
instance_type: "{{ instance_type }}"
image: "{{ ec2_image}}"
zone: "{{ public_az }}"
volumes:
- device_name: "{{ device }}"
volume_type: "{{ instance_volumetype }}"
volume_size: "{{ volume }}"
delete_on_termination: "{{ state }}"
instance_tags:
Name: "{{ instance_name }}_{{ release_name }}_APACHE"
environment: "{{ env_type }}"
vpc_subnet_id: "{{ public_id }}"
assign_public_ip: "{{ public_ip_assign }}"
group_id: "{{ sg_apache }},{{ sg_internal }}"
wait: "{{ wait_type }}"
register: ec2
- name: adding group to inventory file
lineinfile:
dest: "/etc/ansible/hosts"
regexp: "^\\[{{ release_name }}\\]"
line: "[{{ release_name }}]"
state: present
when: ec2 | changed
register: fileoutput
- name: adding apache ip to hosts
lineinfile:
dest: "/etc/ansible/hosts"
line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}
with_items: ec2.instances
when: fileoutput | changed
In your code, register a variable in each and every task; when the previous task reports changed: true, the following task will execute, otherwise it will be skipped.
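One caveat on the examples above: the result-checking filter syntax (ec2 | changed, fileoutput | changed) was removed in Ansible 2.9. On current versions use the test form instead, e.g. when: ec2 is changed, and likewise result is failed / result is succeeded.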