Connect EC2 instance to Target Group using Ansible

I have been registering EC2 instances with ELBs using Ansible for a while. But now I'm starting to use ALBs, and I need to connect my instances to Target Groups, which in turn are connected to the ALB. Is there an Ansible plugin that allows me to register an instance to an AWS Target Group?

Since Ansible did not support registration of instances to target groups, I had to use the AWS CLI. With the following command you can register an instance to a target group:
aws elbv2 register-targets --target-group-arn arn:aws:elasticloadbalancing:us-east-1:your-target-group --targets Id=i-your-instance
So I just call this command from Ansible and it's done.
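A minimal sketch of what that call from Ansible can look like, using the command module (the ARN and instance ID are the same placeholders as in the command above):

```yaml
- name: Register instance with the target group via the AWS CLI
  command: >
    aws elbv2 register-targets
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:your-target-group
    --targets Id=i-your-instance
  # register-targets succeeds silently even if the target is already
  # registered, so this task always reports "changed"
  changed_when: true
```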

Use elb_target:

- name: Gather facts for all new proxy instances
  ec2_instance_facts:
    filters:
      "tag:Name": "{{ ec2_tag_proxy }}"
  register: ec2_proxy

- name: Ensure the target group exists
  elb_target_group:
    name: uat-target-proxy
    protocol: http
    port: 80
    vpc_id: vpc-4e6e8112
    deregistration_delay_timeout: 60
    stickiness_enabled: True
    stickiness_lb_cookie_duration: 86400
    health_check_path: /
    successful_response_codes: "200"
    health_check_interval: "20"
    state: present

- name: Register the proxy instances with the target group
  elb_target:
    target_group_name: uat-target-proxy
    target_id: "{{ item.instance_id }}"
    target_port: 80
    state: present
  with_items: "{{ ec2_proxy.instances }}"
  when: ec2_proxy.instances | length > 0

Try the following configuration:

- name: creating target group
  local_action:
    module: elb_target_group
    region: us-east-2
    vpc_id: yourvpcid
    name: create-targetgrp-delete
    protocol: https
    port: 443
    health_check_path: /
    successful_response_codes: "200,250-260"
    state: present
    targets:
      - Id: ec2instanceid
        Port: 443
    wait_timeout: 200
  register: tgp

Related

Launch instance in AWS and add to hosts file in Ansible (not dynamic inventory)

I have a lot of inventories inside Ansible. I'm not looking for a dynamic-inventory solution.
When I create an instance in AWS from Ansible, I need to run some tasks inside that new server, but I don't know what the best way to do it is.
tasks:
  - ec2:
      aws_secret_key: "{{ ec2_secret_key }}"
      aws_access_key: "{{ ec2_access_key }}"
      region: us-west-2
      key_name: xxxxxxxxxx
      instance_type: t2.medium
      image: ami-xxxxxxxxxxxxxxxxxx
      wait: yes
      wait_timeout: 500
      volumes:
        - device_name: /dev/xvda
          volume_type: gp3
          volume_size: 20
          delete_on_termination: yes
      vpc_subnet_id: subnet-xxxxxxxxxxxx
      assign_public_ip: no
      instance_tags:
        Name: new-instances
      count: 1

  - name: Look up the IP of the newly created instance
    ec2_instance_facts:
      filters:
        "tag:Name": new-instances
    register: ec2_instance_info

  - set_fact:
      msg: "{{ ec2_instance_info | json_query('instances[*].private_ip_address') }}"

  - debug: var=msg
I can currently show the IP of the new instance, but I need to create a new hosts file and insert the IP of the new instance, as I have to run several tasks after creating it.
Any help doing this?
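One approach that may fit here (a sketch; the group name and hosts-file path are assumptions) is to use add_host to make the new IP available to later plays in the same run, and lineinfile to persist it into a static hosts file:

```yaml
- name: Add the new instance to an in-memory group for later plays
  add_host:
    name: "{{ ec2_instance_info.instances[0].private_ip_address }}"
    groups: new_instances

- name: Persist the IP into a static hosts file
  lineinfile:
    path: /etc/ansible/hosts.new
    line: "{{ ec2_instance_info.instances[0].private_ip_address }}"
    create: yes
```

A follow-up play with hosts: new_instances can then run the remaining tasks against the new server in the same playbook run.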

Specifying a subnetwork for Ansible google.cloud.compute_instance

I have tried every combination I can conceive of to deploy a Google Compute instance into a particular subnet (subnetX) in network (networkY).
- name: create an instance
  google.cloud.gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'true'
        boot: 'true'
        source: "{{ disk }}"
      - auto_delete: 'true'
        interface: NVME
        type: SCRATCH
        initialize_params:
          disk_type: local-ssd
    labels:
      environment: production
    network_interfaces:  # <<< does not work: the API request is made without a network_interface
      - network:
          selfLink: "https://blah/blah/blah/networkY"
        subnetwork:
          selfLink: "https://blah/blah/blah/subnetworkX"
    zone: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
When using subnetwork, you should not specify a network.
More precisely, the following is a workaround for this problem.
---
- name: create a network
  gcp_compute_network:
    name: ansible-network
    auto_create_subnetworks: yes
    project: "{{ lookup('env','GCP_PROJECT') }}"
    state: present
  register: network

- name: Get Network URL
  set_fact:
    network_url: "{{ network | json_query(jmesquery) }}"
  vars:
    jmesquery: "{selfLink: selfLink}"

- name: create a firewall
  gcp_compute_firewall:
    name: ansible-firewall
    network: "{{ network_url }}"
    allowed:
      - ip_protocol: tcp
        ports: ['80', '22']
    target_tags:
      - apache-http-server
    source_ranges: ['0.0.0.0/0']
    project: "{{ lookup('env','GCP_PROJECT') }}"
    state: present
  register: firewall
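Applying the "subnetwork without network" advice to the instance task from the question, a sketch that passes only the subnetwork (reusing the question's placeholder selfLink; the other fields are unchanged from the question):

```yaml
- name: create an instance in a specific subnetwork
  google.cloud.gcp_compute_instance:
    name: test_object
    machine_type: n1-standard-1
    disks:
      - auto_delete: 'true'
        boot: 'true'
        source: "{{ disk }}"
    network_interfaces:
      # only the subnetwork is given; the network is implied by it
      - subnetwork:
          selfLink: "https://blah/blah/blah/subnetworkX"
    zone: us-central1-a
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present
```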

Attach boot disk, if it exists, to Google Cloud instance with Ansible

I'm creating an instance in Google Cloud with Ansible, but when I want to attach an existing disk to the new compute engine instance, I can't attach it or add it to the instance.
- name: Launch instances
  gce:
    instance_names: mongo
    machine_type: "n1-standard-1"
    image: "debian-9"
    service_account_email: "xxxx#xxxx.iam.gserviceaccount.com"
    credentials_file: "gcp-credentials.json"
    project_id: "learning"
    disk_size: 10
    disk_auto_delete: false
    preemptible: true
    tags: "mongo-server"
  register: gce

- name: Wait for SSH for instances
  wait_for:
    delay: 1
    host: "{{ item.public_ip }}"
    port: 22
    state: started
    timeout: 30
  with_items: "{{ gce.instance_data }}"
The error I have is:
The error was: libcloud.common.google.ResourceExistsError: {'domain': 'global', 'message': "The resource 'projects/xxx-xxx/zones/us-central1-a/disks/mongo' already exists", 'reason': 'alreadyExists'}
Is there any way to configure this option with Ansible? To do that, I'm currently using external scripts.
Existing disks can be provided as a list under the 'disks' attribute; the first entry needs to be the boot disk.
https://docs.ansible.com/ansible/2.6/modules/gce_module.html
- gce:
    instance_names: my-test-instance
    zone: us-central1-a
    machine_type: n1-standard-1
    state: present
    metadata: '{"db":"postgres", "group":"qa", "id":500}'
    tags:
      - http-server
      - my-other-tag
    disks:
      - name: disk-2
        mode: READ_WRITE
      - name: disk-3
        mode: READ_ONLY

Ansible - Write variable to config file

We have some Redis configurations that differ only in the port and maxmemory settings, so I'm looking for a way to write a 'base' config file for Redis and then replace the port and maxmemory variables.
Can I do that with Ansible?
For such operations the lineinfile module usually works best; for example:

- name: Ensure maxmemory is set to 2 MB
  lineinfile:
    dest: /path/to/redis.conf
    regexp: maxmemory
    line: maxmemory 2mb
Or change multiple lines in one task with with_items:

- name: Ensure Redis parameters are configured
  lineinfile:
    dest: /path/to/redis.conf
    regexp: "{{ item.line_to_match }}"
    line: "{{ item.line_to_configure }}"
  with_items:
    - { line_to_match: "maxmemory", line_to_configure: "maxmemory 2mb" }
    - { line_to_match: "port", line_to_configure: "port 4096" }
Or if you want to create a base config, write it in Jinja2 and use a template module:
vars:
  redis_maxmemory: 2mb
  redis_port: 4096

tasks:
  - name: Ensure Redis is configured
    template:
      src: redis.conf.j2
      dest: /path/to/redis.conf
with redis.conf.j2 containing:
maxmemory {{ redis_maxmemory }}
port {{ redis_port }}
The best way I've found to do this (and I use the same technique everywhere) is to create a redis role with a defaults vars file, and then override the vars when you call the role.
So in roles/redis/defaults/main.yml:
redis_bind: 127.0.0.1
redis_memory: 2GB
redis_port: 1337
And in your playbook:

- name: Provision redis node
  hosts: redis1
  roles:
    - role: redis
      redis_port: 9999
      redis_memory: 4GB

- name: Provision redis node
  hosts: redis2
  roles:
    - role: redis
      redis_port: 8888
      redis_memory: 8GB
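Inside the role, a template task then consumes whatever values were passed (a sketch; the file paths follow the standard role layout, and redis.conf.j2 is the same kind of Jinja2 template as in the earlier answer):

```yaml
# roles/redis/tasks/main.yml
- name: Render redis config from role vars
  template:
    src: redis.conf.j2
    dest: /etc/redis/redis.conf
```

Because role defaults have the lowest variable precedence, the values set in the playbook's role call win over roles/redis/defaults/main.yml.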

Cannot delete vpc of AWS using ansible

I have created a VPC with a subnet and deleted the instance in that subnet, so the subnet and VPC no longer contain any instances (dependencies).
Furthermore, the VPC can be deleted from the console. But when deletion is attempted through the ec2_vpc module, it gives an error stating "dependencies are present under this vpc so cannot be deleted", even though it can be deleted from the console.
So I thought that the subnet under this VPC may be the dependency. The Ansible documentation provides a module for subnets, but when used it reports "illegal parameter", as if the module doesn't exist.
(Image: the route table that cannot be deleted by the snippet given by olle.)
I am using this code to create a VPC with Ansible:

- name: create a VPC
  local_action:
    module: ec2_vpc
    state: present
    cidr_block: 10.0.0.0/16
    resource_tags: "{}"
    subnets:
      - cidr: 10.0.0.0/16
    internet_gateway: True
    route_tables:
      - subnets:
          - 10.0.0.0/16
        routes:
          - dest: 0.0.0.0/0
            gw: igw
    region: '{{ region }}'
    wait: yes
  register: vpc
and the following two commands to remove it.
The first command removes the subnets, the IGW and the routing tables.
- name: remove subnets and route tables from VPC
  local_action:
    module: ec2_vpc
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    state: present
    resource_tags: "{}"
    subnets: []
    internet_gateway: False
    route_tables: []
    wait: yes
and then I can remove the actual VPC
- name: delete VPC
  local_action:
    module: ec2_vpc
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    state: absent
    resource_tags: "{}"
    wait: yes
Hope this helps you.
Ansible provides the ec2_vpc_net module for VPCs; you can use it to create or delete a VPC. For example:
- name: DELETE THE VPC
  ec2_vpc_net:
    name: vpc_name
    cidr_block: "{{ build_env.vpc_net_cidr }}"
    region: "{{ build_env.region }}"
    profile: "{{ build_env.profile }}"
    state: absent
    purge_cidrs: yes
  register: vpc_delete
Hope this helps
