I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
Now I want to check the exit status of each task, whether it succeeded or failed. If any one of the tasks fails, the remaining tasks should not execute. Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2. Now use the fail module to stop the play if the task fails. Ex.:
  register: ec2
- fail:
  when: "ec2.rc == 1"
Here rc is the return code of the command; we are assuming 1 for failure and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
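For illustration, a minimal sketch of that register-then-check pattern, using a hypothetical /usr/bin/provision-step command (older Ansible releases spelled the test as step_result | succeeded rather than step_result is succeeded):
- name: a step whose outcome we want to inspect (hypothetical command)
  command: /usr/bin/provision-step
  register: step_result
  ignore_errors: true   # keep the play running so the next task can inspect the result
- name: runs only when the step above succeeded
  debug:
    msg: "provision-step succeeded"
  when: step_result is succeeded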
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
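One caveat worth adding: in a multi-host play, a failure only stops further tasks on the failing host. To abort the whole play for all hosts as soon as any host fails, set the any_errors_fatal play keyword; a minimal sketch with a hypothetical host group and command:
- hosts: webservers
  any_errors_fatal: true   # one host failing aborts the play everywhere
  tasks:
    - name: a step that must succeed on every host
      command: /usr/bin/somecommand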
Maybe playbook blocks and their error handling can help you?
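For reference, block error handling looks roughly like this (the task bodies here are placeholders): tasks inside block run normally, rescue runs only if one of them fails, and always runs in either case.
- name: provision with rollback
  block:
    - name: risky task
      command: /usr/bin/true
  rescue:
    - name: runs only if a task in the block failed
      debug:
        msg: "rolling back"
  always:
    - name: runs whether the block succeeded or failed
      debug:
        msg: "cleanup"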
Kumar, if you want to check each task's output for success or failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput
- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: "{{ ec2.instances }}"
  when: fileoutput | changed
Register a variable in each and every task; if that task's registered result has changed equal to true, the following task will execute, otherwise it will be skipped.
I've written an Ansible playbook to collect results from many network devices. The playbook below is working fine, but if I have to collect the results of a lot of commands, say 20, I have to create many tasks just to write the results into a file.
For now, I manually create the tasks that write the logs into the file. Below is an example with 3 commands.
- name: run multiple commands and evaluate the output
  hosts: <<network-host>>
  gather_facts: no
  connection: local
  vars:
    datetime: "{{ lookup('pipe', 'date +%Y%m%d%H') }}"
    backup_dir: "/backup/"
    cli:
      host: "{{ ansible_host }}"
      username: <<username>>
      password: <<password>>
  tasks:
    - sros_command:
        commands:
          - show version
          - show system information
          - show port
        provider: "{{ cli }}"
      register: cmd_result
    - name: Writing output
      local_action:
        module: lineinfile
        dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
        line: "{{ inventory_hostname }}:# show version\n{{ cmd_result.stdout[0] }}"
        create: yes
      changed_when: False
    - name: Writing output
      local_action:
        module: lineinfile
        dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
        line: "{{ inventory_hostname }}:# show system information\n{{ cmd_result.stdout[1] }}"
        create: yes
      changed_when: False
    - name: Writing output
      local_action:
        module: lineinfile
        dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
        line: "{{ inventory_hostname }}:# show port\n{{ cmd_result.stdout[2] }}"
        create: yes
      changed_when: False
Is it possible to loop over the commands and their results within one task?
Please kindly advise.
Thanks
Try this single task in place of the three tasks above:
- name: Writing output
  local_action:
    module: lineinfile
    dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
    line: "{{ inventory_hostname }}:# show {{ item.command }}\n{{ cmd_result.stdout[item.outnum] }}"
    create: yes
  changed_when: False
  with_items:
    - { command: "version", outnum: 0 }
    - { command: "system information", outnum: 1 }
    - { command: "port", outnum: 2 }
The playbook below worked for me:
- name: Writing output
  local_action:
    module: lineinfile
    dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
    line: "{{ inventory_hostname }}:# show {{ item.command }}\n{{ item.cmdoutput }}"
    create: yes
  changed_when: False
  with_items:
    - { command: "version", cmdoutput: "{{ cmd_result.stdout[0] }}" }
    - { command: "system information", cmdoutput: "{{ cmd_result.stdout[1] }}" }
    - { command: "port", cmdoutput: "{{ cmd_result.stdout[2] }}" }
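If you would rather not maintain the index numbers by hand at all, the same result can likely be achieved by zipping the command list with the registered stdout list; a sketch of that idea, assuming cmd_result is registered as above:
- name: Writing output
  local_action:
    module: lineinfile
    dest: "{{ backup_dir }}/{{ inventory_hostname }}-{{ datetime }}.txt"
    # item.0 is the command string, item.1 the matching stdout entry
    line: "{{ inventory_hostname }}:# {{ item.0 }}\n{{ item.1 }}"
    create: yes
  changed_when: False
  with_together:
    - [ "show version", "show system information", "show port" ]
    - "{{ cmd_result.stdout }}"
Adding a new command then only requires one new list entry rather than a new task.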
I am looking to loop through a list of variables. I have it looping through the list using with_items; the catch, however, is that each entry contains an inner list with a different number of items that I need to iterate through as well.
I have tried different lookups, including with_nested, with_subelements, and with_items. I know that Ansible is moving towards loop as the primary mechanism going forward, so any solution would ideally follow that path. I am considering either an "inner" loop or an external task that iterates through the vlans_list and inputs that data at the right point.
Group variables:
vnic_templates:
  - name: vNIC-A
    fabric: A
    mac_pool: testmac1
    mtu: 1500
    org_dn: org-root
    redundancy_type: none
    state: present
    template_type: initial-template
    vlans_list:   ### THE PROBLEM CHILD
      - name: vlan2
        native: 'no'
        state: present
      - name: vlan3
        native: 'no'
        state: present
The actual task is below; I have issues when I have to input multiple VLANs. The vNIC template has a one-to-one relationship with its other fields, but vlans_list can map one vnic_template to many VLANs.
ucs_vnic_template:
  hostname: "{{ ucs_manager_hostname }}"
  username: "{{ ucs_manager_username }}"
  password: "{{ ucs_manager_password }}"
  name: "{{ item.name }}"
  fabric: "{{ item.fabric }}"
  mac_pool: "{{ item.mac_pool }}"
  mtu: "{{ item.mtu }}"
  org_dn: "{{ item.org_dn }}"
  redundancy_type: "{{ item.redundancy_type }}"
  state: "{{ item.state }}"
  template_type: "{{ item.template_type }}"
  vlans_list:
    - name: "{{ item.1.name }}"
      native: "{{ item.1.native }}"
      state: "{{ item.1.present }}"
# loop: "{{ vnic_templates | subelements('vlans_list') }}"
with_items:
  - "{{ vnic_templates }}"
I am starting down the road of adding an include of a vlan_list.yml outside of this task, but I am not familiar with how to do that.
The actual result is:
The task includes an option with an undefined variable. The error was: 'item' is undefined\n\n
I need to create a single vNIC template with the multiple VLANs defined in that list.
Another engineer I work with was able to solve this. Because of the way the variables are laid out, we were able to simply change the code.
Change this:
vlans_list:
  - name: "{{ item.1.name }}"
    native: "{{ item.1.native }}"
    state: "{{ item.1.present }}"
To this:
vlans_list: "{{ item.vlans_list }}"
Full code is listed below.
- name: Add vNIC Templates
  ucs_vnic_template:
    hostname: "{{ ucs_manager_hostname }}"
    username: "{{ ucs_manager_username }}"
    password: "{{ ucs_manager_password }}"
    name: "{{ item.name }}"
    fabric: "{{ item.fabric }}"
    mac_pool: "{{ item.mac_pool }}"
    mtu: "{{ item.mtu }}"
    org_dn: "{{ item.org_dn }}"
    redundancy_type: "{{ item.redundancy_type }}"
    state: "{{ item.state }}"
    template_type: "{{ item.template_type }}"
    vlans_list: "{{ item.vlans_list }}"
  with_items:
    - "{{ vnic_templates }}"
I'm starting out with Ansible and AWS.
I've created a playbook that launches a new instance and should later attach an existing volume to it:
- name: launch instance
  ec2:
    key_name: "{{ aws_vars.key_name }}"
    group: "{{ aws_vars.security_group }}"
    instance_type: "{{ aws_vars.instance_type }}"
    image: "{{ aws_vars.image }}"
    region: "{{ aws_vars.region }}"
    wait: yes
    count: 1
    instance_tags: "{{ tags }}"
    monitoring: yes
    vpc_subnet_id: "{{ subnetid }}"
    assign_public_ip: yes
  register: destination
- name: Attach volumes
  ec2_vol:
    device_name: xvdf
    instance: "{{ destination.instances[0].instance_id }}"
    region: "{{ aws_vars.region }}"
    tags: "{{ ec2_tags }}"
    id: "{{ volume_id }}"
    delete_on_termination: yes
  with_items: "{{ destination }}"
So far, so good; everything works.
I would like to add a clean-up step, so that if there is an error of any kind in the later modules, I won't be left with any garbage instances.
I understand that the idea here is to use a block, but when I tried working with a block, nothing really happens:
---
# EC2 Migrations.
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Vars
      include_vars:
        dir: files
        name: aws_vars
    - name: Create instance
      block:
        - debug:
            msg: "Launching EC2"
          notify:
            - launch_instance
            - Cloudwatch
      rescue:
        - debug:
            msg: "Rolling back"
          notify: stop_instance
  handlers:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug: msg="{{ new_ec2.instances[0].id }}"
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
You put a debug task into the block. The debug module returns an ok status, so:
1. it does not notify a handler (that requires a changed status),
2. the rescue section is never triggered (that requires a failed status).
Thus it is expected that "nothing really happens".
You need to put your actual tasks into the block (I leave them intact, assuming they are correct):
- name: Create instance
  block:
    - name: launch_instance
      ec2:
        key_name: "{{ aws_vars.key_name }}"
        group: "{{ aws_vars.security_group }}"
        instance_type: "{{ aws_vars.instance_type }}"
        image: "{{ aws_vars.image }}"
        region: "{{ region }}"
        wait: yes
        count: 1
        monitoring: yes
        assign_public_ip: yes
      register: new_ec2
    - debug:
        msg: "{{ new_ec2.instances[0].id }}"
    - name: Cloudwatch
      ec2_metric_alarm:
        state: present
        region: "{{ aws_vars.region }}"
        name: "{{ new_ec2.id }}-High-CPU"
        metric: "CPUUtilization"
        namespace: "AWS/EC2"
        statistic: Average
        comparison: ">="
        threshold: "90.0"
        period: 300
        evaluation_periods: 3
        unit: "Percent"
        description: "Instance CPU is above 90%"
        dimensions: "{'InstanceId': '{{ new_ec2.instances[0].id }}' }"
        alarm_actions: "{{ aws_vars.sns_arn }}"
        ok_actions: "{{ aws_vars.sns_arn }}"
      with_items: "{{ new_ec2 }}"
  rescue:
    - name: stop_instance
      ec2:
        instance_type: "{{ aws_vars.instance_type }}"
        instance_ids: "{{ new_ec2.instances[0].id }}"
        state: stopped
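As a side note, if some of the clean-up has to run whether provisioning succeeds or fails, an always section can be appended after rescue at the same indentation level; a minimal fragment (the task body is a placeholder):
  always:
    - name: runs on success and on failure
      debug:
        msg: "unconditional cleanup here"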
I have two tasks that differ by one (optional) value. Depending on where the playbook is running, the optional value is either present or not. At the moment I have two copies of the task, selected by a when clause, but this is error-prone: the task in question has a lot of other mandatory config, and the one optional entry is lost in the mix.
- name: Launch configuration for prod
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lcprod
  tags:
    - asg
  when: env == "prod"
- name: Launch configuration for non-prod
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    spot_price: "{{ admin_bid_price }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lcdev
  tags:
    - asg
  when: env != "prod"
- name: Set lc for prod
  set_fact: lc="{{ lcprod }}"
  when: env == "prod"
- name: Set lc for non-prod
  set_fact: lc="{{ lcdev }}"
  when: env != "prod"
Ideally, this example could be collapsed into a single task (i.e. without the subsequent set_fact tasks), with the spot_price entry either present or absent depending on env.
I've tried setting spot_price to null or "" without success.
To me it seems like you want:
spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
(Note the inner single quotes: nesting unescaped double quotes inside a double-quoted YAML scalar is a syntax error.)
See: Ansible Filters - Omitting Parameters and Other Useful Filters
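Applied to the question, that collapses the two launch-configuration tasks into one; a minimal sketch reusing the question's variable names, with register: lc replacing the two set_fact tasks:
- name: Launch configuration
  ec2_lc:
    region: "{{ aws_region }}"
    name: "lc-{{ env }}-{{ type }}-{{ ansible_date_time.iso8601_micro|replace(\":\", \"-\") }}"
    image_id: "{{ ami_id }}"
    key_name: "{{ aws_key_pair_name }}"
    security_groups: "{{ sec_group_instance.group_id }}"
    instance_type: "{{ instance_type }}"
    # omitted entirely when env == "prod", so prod gets on-demand instances
    spot_price: "{{ (env == 'prod') | ternary(omit, admin_bid_price) }}"
    instance_profile_name: "{{ env }}-{{ type }}"
    volumes:
      - device_name: /dev/xvda
        volume_size: 20
        device_type: gp2
        delete_on_termination: true
  register: lc
  tags:
    - asg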
I am starting multiple CloudFormation stacks in a with_items loop in Ansible, like this:
- name: Create CF stack in AWS
  cloudformation:
    stack_name: "{{ item.name }}"
    state: "present"
    template: "{{ item.name }}.py.json"
    template_parameters: "{{ item.template_parameters }}"
  with_items: "{{ CF_TEMPLATE_ITEMS }}"
Can I somehow make Ansible start these stacks in parallel?
Using asynchronous tasks in a fire-and-forget scheme (and waiting for them to finish in a separate task) should work; this has been possible since Ansible 2.0:
- name: Create CF stack in AWS
  async: 100
  poll: 0
  cloudformation:
    stack_name: "{{ item.name }}"
    state: "present"
    template: "{{ item.name }}.py.json"
    template_parameters: "{{ item.template_parameters }}"
  with_items: "{{ CF_TEMPLATE_ITEMS }}"
  register: cf_stack_async_results
- async_status:
    jid: "{{ item.ansible_job_id }}"
  with_items: "{{ cf_stack_async_results.results }}"
  register: cf_stack_async_poll_results
  until: cf_stack_async_poll_results.finished
  retries: 30
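One caveat on the numbers above: async: 100 caps each background job at roughly 100 seconds, and the until loop retries 30 times (with the default 5-second delay, about 150 seconds of polling per item), so both values will likely need raising for CloudFormation stacks that take longer to converge.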