How to check for string when not in variable - ansible

I am trying to create an EC2 volume and attach it to an EC2 instance if the device is not already present.
Here is my code:
vars:
  region: us-east-1
  ec2_instance_id: i-xxxxxxxx
  ec2_volume_size: 5
  ec2_volume_name: chompx
  ec2_device_name: "/dev/sdf"
tasks:
  - name: List volumes for an instance
    ec2_instance_facts:
      instance_ids: "{{ ec2_instance_id }}"
      region: "{{ region }}"
    register: ec2
  - debug: msg="{{ ec2.instances[0].block_device_mappings | list }}"
  - name: Create new volume using SSD storage
    ec2_vol:
      region: "{{ region }}"
      instance: "{{ ec2_instance_id }}"
      volume_size: "{{ ec2_volume_size }}"
      volume_type: gp2
      state: present
    when: "ec2_device_name not in ec2"
The problem: the create-volume task still executes even when /dev/sdf is present in the ec2 variable.

register: ec2 creates an object containing everything the module returned, not just the device list. So "ec2_device_name not in ec2" only tests the string against that object's top-level keys, which is why the create task always runs. You need to test against the device names inside the results instead. If you're not sure of the structure, check it by adding a debug statement:
- debug: var=ec2
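A minimal sketch of the condition, assuming ec2_instance_facts returns the attached devices under instances[0].block_device_mappings with a device_name field on each entry (verify with the debug output above):

- name: Create new volume using SSD storage
  ec2_vol:
    region: "{{ region }}"
    instance: "{{ ec2_instance_id }}"
    volume_size: "{{ ec2_volume_size }}"
    volume_type: gp2
    state: present
  # build a plain list of device names, then test membership
  when: ec2_device_name not in (ec2.instances[0].block_device_mappings | map(attribute='device_name') | list)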

Related

Launch instance in AWS and add in host file in ansible (not dynamic inventory)

I have a lot of inventories inside Ansible. I'm not looking for a dynamic inventory solution.
When I create an instance in AWS from Ansible, I need to run some tasks inside that new server, but I don't know the best way to do it.
tasks:
  - ec2:
      aws_secret_key: "{{ ec2_secret_key }}"
      aws_access_key: "{{ ec2_access_key }}"
      region: us-west-2
      key_name: xxxxxxxxxx
      instance_type: t2.medium
      image: ami-xxxxxxxxxxxxxxxxxx
      wait: yes
      wait_timeout: 500
      volumes:
        - device_name: /dev/xvda
          volume_type: gp3
          volume_size: 20
          delete_on_termination: yes
      vpc_subnet_id: subnet-xxxxxxxxxxxx
      assign_public_ip: no
      instance_tags:
        Name: new-instances
      count: 1
  - name: Look up the IP of the created instance
    ec2_instance_facts:
      filters:
        "tag:Name": new-instances
    register: ec2_instance_info
  - set_fact:
      msg: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"
  - debug: var=msg
I can currently show the IP of the new instance, but I need to create a new host file and insert the IP under an alias, because I have to run several tasks on the new server after creating it.
Any help doing this?
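One common approach (a sketch, not the only way; the group name new_instances and the inventory path are assumptions) is to put the new IPs into an in-memory group with add_host for the rest of the run, and optionally persist them to a static hosts file with lineinfile. A later play targeting hosts: new_instances can then run the follow-up tasks.

- name: Add the new instance(s) to an in-memory group
  add_host:
    name: "{{ item }}"
    groups: new_instances
  loop: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"

- name: Persist the IP(s) to a static inventory file
  lineinfile:
    path: ./inventories/new_hosts   # assumed path
    line: "new-instances ansible_host={{ item }}"
    create: yes
  loop: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"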

How to get all security groups in aws using ansible

I have an Ansible playbook that is supposed to get all the security groups in a region.
It also uses MFA authentication.
- name: Generate temporary access keys and token
  sts_assume_role:
    duration_seconds: 900
    region: "{{ region }}"
    aws_access_key: "{{ user_access_key }}"
    aws_secret_key: "{{ user_secret_key }}"
    mfa_serial_number: "{{ user_mfa_serial }}"
    role_arn: "{{ role_arn }}"
    mfa_token: "{{ mfa_token }}"
    role_session_name: "{{ role_session_name }}"
  register: assumed_role
- name: ec2 security group information fetch
  ec2_group_facts:
    aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    security_token: "{{ assumed_role.sts_creds.session_token }}"
    filters: {}
  register: result
- debug: msg="{{ result.security_groups }}"
The problem is that when I run it, I only get the default security group, but I want all of them.
When I add a filter for a particular vpc-id, the result is empty even though that VPC exists.
What could be the issue?
Thanks
I had forgotten to include region in ec2_group_info; it now looks like this:
- name: ec2 security group information fetch
  ec2_group_info:
    region: "{{ region }}"
    aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    security_token: "{{ assumed_role.sts_creds.session_token }}"
  register: result
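With the region in place, filtering by VPC should work as well; a sketch (the vpc_id variable is an assumption):

- name: Fetch security groups for one VPC
  ec2_group_info:
    region: "{{ region }}"
    aws_access_key: "{{ assumed_role.sts_creds.access_key }}"
    aws_secret_key: "{{ assumed_role.sts_creds.secret_key }}"
    security_token: "{{ assumed_role.sts_creds.session_token }}"
    filters:
      vpc-id: "{{ vpc_id }}"   # assumed variable holding the VPC ID
  register: vpc_sgs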

Ansible: ec2_eni cannot attach interface

Using Ansible I am trying to create EC2 instances and attach an extra network interface to each instance so that they will have two private IP addresses. However, for some reason, it seems that the ec2_eni module can create network interfaces, but will not attach them to the instances specified. What am I doing wrong? Below is my playbook:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create new servers
      ec2:
        region: "{{ region }}"
        vpc_subnet_id: "{{ subnet }}"
        group_id: "{{ sec_group }}"
        assign_public_ip: yes
        wait: true
        key_name: '{{ key }}'
        instance_type: t2.micro
        image: '{{ ami }}'
        exact_count: '{{ count }}'
        count_tag:
          Name: "{{ server_name }}"
        instance_tags:
          Name: "{{ server_name }}"
      register: ec2
    - name: Show ec2 instance json data
      debug:
        msg: "{{ ec2['tagged_instances'] }}"
    - name: Allocate new elastic IPs and associate them with instances
      ec2_eip:
        region: "{{ region }}"
        device_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
      register: eips
    - name: Show eip instance json data
      debug:
        msg: "{{ eips['results'] }}"
    - ec2_eni:
        subnet_id: "{{ subnet }}"
        state: present
        secondary_private_ip_address_count: 1
        security_groups: "{{ sec_group }}"
        region: "{{ region }}"
        device_index: 1
        description: "test-eni"
        instance_id: "{{ item['id'] }}"
      with_items: "{{ ec2['tagged_instances'] }}"
The strange thing is that the ec2_eni task reports success, as if it had attached the network interface to each instance, when in reality it just creates the interface and does nothing more with it.
As best I can tell, attached defaults to None, and the module docs say:
Specifies if network interface should be attached or detached from instance. If omitted, attachment status won't change
So the code does what the docs describe and skips the attachment step.
This appears to be a documentation bug: the docs claim the default is 'yes', but the actual default is None.
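Setting attached explicitly should make the module perform the attachment; a sketch of the fixed task:

- ec2_eni:
    subnet_id: "{{ subnet }}"
    state: present
    attached: true   # explicit, since the real default is None
    secondary_private_ip_address_count: 1
    security_groups: "{{ sec_group }}"
    region: "{{ region }}"
    device_index: 1
    description: "test-eni"
    instance_id: "{{ item['id'] }}"
  with_items: "{{ ec2['tagged_instances'] }}"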

How to set delete on termination for boot volume in ec2 volume using ansible

It appears to be easy to set delete-on-termination for new volumes attached to an EC2 instance, but how do I set it on the boot volume?
Are you asking about setting this option on the root/boot volume during instance creation? If so, here is example code:
- name: Create EC2 Instance(s)
  ec2:
    region: "{{ vpc_region }}"
    group: "{{ ec2_security_group_name }}"
    keypair: "{{ ec2_key_name }}"
    instance_type: "{{ ec2_instance_type }}"
    image: "{{ ami_find_result.results[0].ami_id }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: True
    wait_timeout: 600
    user_data: |
      #!/bin/sh
      sudo apt-get install nginx -y
    instance_tags:
      Name: "{{ vpc_name }}-{{ SERVER_ROLE }}"
      Environment: "{{ ENVIRONMENT }}"
      Server_Role: "{{ SERVER_ROLE }}"
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: "{{ ec2_volume_size }}"
        delete_on_termination: yes
For details, please check the ec2 module documentation.
Hope that helps.
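For an instance that already exists, one option (a sketch, assuming the AWS CLI is installed and credentialed on the control host; the ec2_instance_id variable and the /dev/sda1 device name are assumptions) is to flip the flag with modify-instance-attribute:

- name: Enable delete-on-termination on the root volume of a running instance
  # ec2_instance_id and /dev/sda1 are assumptions; check the actual root device name
  command: >
    aws ec2 modify-instance-attribute
    --instance-id {{ ec2_instance_id }}
    --block-device-mappings
    '[{"DeviceName":"/dev/sda1","Ebs":{"DeleteOnTermination":true}}]'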

Ansible - Use string in a variable lookup

I'm trying to make my playbooks aware of their environment, that being production or stage.
My playbook first pulls the ec2_tag for its environment successfully and sets it in the variable 'env'. On the EC2 provision task, I would like to use the 'env' variable in the vpc and subnet lookups, so it grabs the correct one based on its environment. Below I tried adding the 'env' variable into the lookups, but I'm getting the full string back instead of it pulling the value from my variable file.
Do I need to add some logic checks within the {{}}?
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ env + '_subnet_public1' }}"
    group_id: "{{ env + '_app_group' }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
If you want to do variable interpolation then you'll need to use the hostvars[inventory_hostname]['variable'] syntax.
The FAQs cover this briefly but for your use case you should be able to do something like:
- name: Provision instance
  ec2:
    key_name: "{{ aws_public_key }}"
    instance_type: t2.medium
    image: "{{ aws_ubuntu_ami }}"
    wait: true
    vpc_subnet_id: "{{ hostvars[inventory_hostname][env + '_subnet_public1'] }}"
    group_id: "{{ hostvars[inventory_hostname][env + '_app_group'] }}"
    assign_public_ip: yes
    instance_tags:
      Name: "{{ sys_name }}"
      Environment: "{{ env }}"
    region: "{{ aws_region }}"
    volumes:
      - device_name: /dev/sda1
        volume_size: 300
        delete_on_termination: true
  register: ec2
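An alternative that avoids hostvars is the vars lookup plugin (available since Ansible 2.5), which reads a variable by its computed name; a sketch of just the two affected lines:

vpc_subnet_id: "{{ lookup('vars', env + '_subnet_public1') }}"
group_id: "{{ lookup('vars', env + '_app_group') }}"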
