Using Ansible, I am trying to create EC2 instances and attach an extra network interface to each instance so that they have two private IP addresses. However, the ec2_eni module seems to create the network interfaces but never attaches them to the specified instances. What am I doing wrong? Below is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create new servers
      ec2:
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group_id: "{{ sec_group }}"
        assign_public_ip: yes
        wait: true
        key_name: '{{ key }}'
        instance_type: t2.micro
        image: '{{ ami }}'
        exact_count: '{{ count }}'
        count_tag:
          Name: "{{ server_name }}"
        instance_tags:
          Name: "{{ server_name }}"
      register: ec2
    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    - name: allocate new elastic IPs and associate it with instances
      ec2_eip:
        region: "{{ region }}"
        device_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
      register: eips
    - name: Show eip instance json data
      debug:
        msg: "{{ eips['results'] }}"
    - ec2_eni:
        subnet_id: "{{ subnet }}"
        state: present
        secondary_private_ip_address_count: 1
        security_groups: "{{ sec_group }}"
        region: "{{ region }}"
        device_index: 1
        description: "test-eni"
        instance_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
The strange thing is that the ec2_eni task succeeds, reporting that it has attached a network interface to each instance, when in reality it just creates the network interface and then does nothing with it.
As best I can tell, attached defaults to None, and the module docs say:
Specifies if network interface should be attached or detached from instance. If omitted, attachment status won't change
so the code does what the docs describe and skips the attachment step.
This appears to be a documentation bug: the docs claim the default is 'yes', which is not accurate.
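A minimal sketch of the fix, reusing the variables from the playbook above: set attached explicitly so the module performs the attachment (attached is the option whose documentation is quoted above).

- ec2_eni:
    subnet_id: "{{ subnet }}"
    state: present
    attached: true
    secondary_private_ip_address_count: 1
    security_groups: "{{ sec_group }}"
    region: "{{ region }}"
    device_index: 1
    description: "test-eni"
    instance_id: "{{ item['id'] }}"
  with_items: "{{ ec2['tagged_instances'] }}"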
I have an Ansible playbook that creates l3_subinterfaces on a Palo Alto firewall; the creation is based on the host_vars of the firewall.
- l3_subinterfaces:
    - tag: "9"
      vr_name: "vr_production"
      ip: "10.0.9.2/24"
      comment: "VLAN9 Subinterface"
      parent_if: "ethernet1/1"
      zone: "Infrastructuur"
    - tag: "13"
      vr_name: "vr_production"
      ip: "10.0.13.2/24"
      comment: "VLAN13 Subinterface"
      parent_if: "ethernet1/2"
      zone: "Infrastructuur"
And the playbook task which creates the interfaces:
- name: Configure l3_subinterfaces
  panos_l3_subinterface:
    provider: "{{ panos_provider }}"
    name: "{{ item.parent_if }}.{{ item.tag }}"
    tag: "{{ item.tag }}"
    ip: ["{{ item.ip }}"]
    vr_name: "{{ item.vr_name }}"
    zone_name: "{{ item.zone }}"
    comment: "{{ item.comment }}"
    enable_dhcp: false
  with_items:
    - "{{ l3_subinterfaces }}"
  when: l3_subinterfaces is defined
So at this point everything is working fine. However, what I'm trying to achieve is holding the state of the firewall in the Ansible inventory.
For example, if I now delete the l3_subinterface with tag 13 and run the task again, the firewall still has the l3_subinterface with tag 13 configured.
I'm trying to figure out how I can delete the l3_subinterfaces which exist on the firewall but don't exist in my host_vars. I think I need to compare something like the facts with the host_vars, but I really have no clue how to do it.
Actually, I've already found my own answer. The solution is to compare the list l3_subinterfaces against the Palo Alto interfaces:
- name: Get interfaces facts
  panos_facts:
    provider: '{{ panos_provider }}'
    gather_subset: ['interfaces']
- name: Delete unused l3_subinterfaces
  panos_l3_subinterface:
    provider: "{{ panos_provider }}"
    name: "{{ item }}"
    tag: "{{ item|regex_search('\\d+$') }}"
    state: "absent"
  with_items:
    - "{{ ansible_net_interfaces|selectattr('tag', 'defined')|map(attribute='name')|list | difference(l3_subinterfaces|map(attribute='name')|list) }}"
I wrote two Ansible playbooks to create and destroy a VM inside an ESXi instance.
The create task is:
- name: Clone the template
  delegate_to: localhost
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    cluster: "{{ vcenter_cluster_name }}"
    datacenter: "{{ vcenter_datacenter_name }}"
    folder: "{{ vcenter_datacenter_folder }}"
    datastore: "{{ vcenter_datastore }}"
    validate_certs: False
    name: "{{ inventory_hostname }}"
    template: "{{ vm_template }}"
    state: poweredon
    wait_for_ip_address: yes
    networks:
      - name: "DSwitch_Dati-VM Network 869"
        ip: "{{ ansible_host }}"
        netmask: "{{ vm_netmask }}"
        gateway: "{{ vm_gateway }}"
        start_connected: yes
The delete playbook is:
- name: TMS Cleaner
  hosts: all
  remote_user: tms
  tasks:
    - name: Set powerstate of virtual machine to poweroff
      delegate_to: localhost
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        state: poweredoff
    - name: Remove virtual machine from inventory
      delegate_to: localhost
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        folder: "{{ vcenter_datacenter_folder }}"
        datastore: "{{ vcenter_datastore }}"
        validate_certs: False
        name: "{{ inventory_hostname }}"
        delete_from_inventory: True
        state: absent
Creation works correctly, and deletion correctly stops and removes the VM, BUT it does not remove the folder from the datastore.
What should I do to get a full deletion of all files related to a VM?
If you also want the files deleted from the datastore, you need to remove the following line:
delete_from_inventory: True
The Ansible documentation for this module says:
delete_from_inventory:
    Whether to delete Virtual machine from inventory or delete from disk.
    Choices: no | yes
Remove only that line and the files will be deleted from the datastore.
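As a minimal sketch (same variables as the question's playbook), the adjusted removal task would then look like this, with delete_from_inventory omitted so the VM's files are removed from disk:

- name: Remove virtual machine and its files from the datastore
  delegate_to: localhost
  community.vmware.vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    folder: "{{ vcenter_datacenter_folder }}"
    datastore: "{{ vcenter_datastore }}"
    validate_certs: False
    name: "{{ inventory_hostname }}"
    state: absent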
I am trying to create an EC2 volume and attach it to an EC2 instance if the device is not present.
Here is my code:
vars:
  region: us-east-1
  ec2_instance_id: i-xxxxxxxx
  ec2_volume_size: 5
  ec2_volume_name: chompx
  ec2_device_name: "/dev/sdf"
tasks:
  - name: List volumes for an instance
    ec2_instance_facts:
      instance_ids: "{{ ec2_instance_id }}"
      region: "{{ region }}"
    register: ec2
  - debug: msg="{{ ec2.instances[0].block_device_mappings | list }}"
  - name: Create new volume using SSD storage
    ec2_vol:
      region: "{{ region }}"
      instance: "{{ ec2_instance_id }}"
      volume_size: "{{ ec2_volume_size }}"
      volume_type: gp2
      state: present
    when: "ec2_device_name not in ec2"
Problem:
The create-volume task still executes even when /dev/sdf is present in the ec2 variable.
register: ec2 stores an object with all the information about the task that was run; you need to operate on the relevant field of that result, not on the object as a whole.
You probably need:
when: "ec2_device_name not in ec2.stdout_lines" # ec2.stdout_lines should be a list
But I'm not 100% sure on the output of that command. Check it by adding a debug statement:
- debug: var=ec2
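That said, ec2_instance_facts does not return stdout_lines. Judging from the block_device_mappings debug output in the question, a hedged sketch of the condition (assuming each mapping exposes a device_name key) would be:

- name: Create new volume using SSD storage
  ec2_vol:
    region: "{{ region }}"
    instance: "{{ ec2_instance_id }}"
    volume_size: "{{ ec2_volume_size }}"
    volume_type: gp2
    device_name: "{{ ec2_device_name }}"  # assumption: attach at the device being checked
    state: present
  when: "ec2_device_name not in (ec2.instances[0].block_device_mappings | map(attribute='device_name') | list)"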
ansible 2.7.4
Below works:
tasks:
  - name: Launch instance
    ec2:
      key_name: "{{ keypair }}"
      .
      .
    register: ec2
  - name: Add new instance to host group
    add_host:
      hostname: "{{ item.public_ip }}"
      groupname: launched
    with_items: "{{ ec2.instances }}"
But below doesn't
tasks:
  - name: Launch instance
    ec2:
      key_name: "{{ keypair }}"
      .
      .
    register: "{{ register }}"
  - name: Add new instance to host group
    add_host:
      hostname: "{{ item.public_ip }}"
      groupname: launched
    with_items: "{{ register.instances }}"
It results in:
fatal: [localhost]: FAILED! => {"msg": "'ansible.parsing.yaml.objects.AnsibleUnicode object' has no attribute 'instances'"}
Not sure if it is related to this:
https://github.com/ansible/ansible/issues/19803
Many thanks for the replies
Dynamically named register variables are not templatable yet in Ansible; the name you give to register is taken literally rather than rendered as a template.
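A hedged workaround under that constraint (register_name here is a hypothetical variable holding the desired name): register under a fixed name, copy it to the dynamic name with set_fact, and read it back with the vars lookup.

- name: Launch instance
  ec2:
    key_name: "{{ keypair }}"
    # ... remaining parameters as in the question
  register: ec2_result
- name: Store the result under a dynamically chosen name
  set_fact:
    "{{ register_name }}": "{{ ec2_result }}"
- name: Add new instance to host group
  add_host:
    hostname: "{{ item.public_ip }}"
    groupname: launched
  with_items: "{{ lookup('vars', register_name).instances }}"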
I have 3 tasks in my Ansible YAML file, as below.
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: ec2.instances
Now I want to check the exit status of each task, whether it is a success or a failure.
If any one of the tasks fails, my other tasks should not execute.
Please advise how to write such an Ansible playbook.
In your first task, you have registered the output to ec2.
Now use the fail module to stop the play if the task fails.
Ex.

  register: ec2

- fail:
    msg: "Instance provisioning failed"
  when: ec2.rc == 1

Here rc is the return code of the command; we are assuming 1 for fail and 0 for success.
Use the fail module after every task.
Let me know if it works for you.
Register a variable in each task and then check it in the next task. See http://docs.ansible.com/ansible/playbooks_tests.html#task-results
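For example, a hedged sketch of that register-and-check pattern applied to the first two tasks from the question, written with the succeeded test (ignore_errors is needed here so the play reaches the check instead of aborting immediately):

- name: Instance provisioning
  ec2:
    region: "{{ vpc_region }}"
    # ... remaining parameters as in the question
  register: ec2
  ignore_errors: true
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 is succeeded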
This is already the default behavior in Ansible. If a task fails, the Playbook aborts and reports the failure. You don't need to build in any extra functionality around this.
Maybe playbook blocks and their error handling can help you?
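A minimal sketch of that idea, wrapping the question's tasks in a block (the rescue task is only illustrative):

- block:
    - name: Instance provisioning
      ec2:
        region: "{{ vpc_region }}"
        # ... remaining parameters as in the question
      register: ec2
    - name: adding group to inventory file
      lineinfile:
        dest: "/etc/ansible/hosts"
        regexp: "^\\[{{ release_name }}\\]"
        line: "[{{ release_name }}]"
        state: present
  rescue:
    - debug:
        msg: "A task in the block failed; the remaining tasks in the block were skipped."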
Kumar,
if you want to check each task's output for success or failure, do this:
---
- name: Instance provisioning
  local_action:
    module: ec2
    region: "{{ vpc_region }}"
    key_name: "{{ ec2_keypair }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ec2_image }}"
    zone: "{{ public_az }}"
    volumes:
      - device_name: "{{ device }}"
        volume_type: "{{ instance_volumetype }}"
        volume_size: "{{ volume }}"
        delete_on_termination: "{{ state }}"
    instance_tags:
      Name: "{{ instance_name }}_{{ release_name }}_APACHE"
      environment: "{{ env_type }}"
    vpc_subnet_id: "{{ public_id }}"
    assign_public_ip: "{{ public_ip_assign }}"
    group_id: "{{ sg_apache }},{{ sg_internal }}"
    wait: "{{ wait_type }}"
  register: ec2
- name: adding group to inventory file
  lineinfile:
    dest: "/etc/ansible/hosts"
    regexp: "^\\[{{ release_name }}\\]"
    line: "[{{ release_name }}]"
    state: present
  when: ec2 | changed
  register: fileoutput
- name: adding apache ip to hosts
  lineinfile:
    dest: "/etc/ansible/hosts"
    line: "{{ item.private_ip }} name=apache dns={{ item.public_dns_name }}"
  with_items: ec2.instances
  when: fileoutput | changed
Register a variable in each and every task; if the preceding task's changed status is true, the following task executes, otherwise it is skipped.
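One side note, hedged for newer releases: the | changed filter used above was later deprecated in favour of the is changed test, so on recent Ansible those conditions would read:

  when: ec2 is changed
  when: fileoutput is changed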