How to change an existing AWS VPC resource with Ansible? - ansible

I have just created a VPC on AWS with Ansible, like so:
- name: Ensure VPC is present
  ec2_vpc:
    state: present
    region: "{{ ec2_region }}"
    cidr_block: "{{ ec2_subnet }}"
    subnets:
      - cidr: "{{ ec2_subnet }}"
    route_tables:
      - subnets:
          - "{{ ec2_subnet }}"
        routes:
          - dest: 0.0.0.0/0
            gw: igw
    internet_gateway: yes # we want our instances to connect to the internet
    wait: yes # wait for the VPC to be in state 'available' before returning
    resource_tags: { Name: "my_project", environment: "production", tier: "DB" } # we tag this VPC so it will be easier for us to find it later
Note that this task is called from another playbook that takes care of filling the variables.
Also note that I am aware the ec2_vpc module is deprecated and I should update this piece of code soon.
But I think those points are not relevant to the question.
You can see that the resource_tags include a Name for the project. When I first ran this playbook, there was no Name tag.
The VPC got created successfully the first time, but then I wanted to add a name to it. So I added the tag to the play, without knowing exactly how Ansible would recognize that I wanted to update the existing VPC.
Indeed Ansible created a new VPC and did not update the existing one.
So the question is, how do I update the already existing one instead of creating a new resource?

After a conversation on the Ansible IRC (can't remember the usernames, sorry), it appears this specific module is not good at updating resources.
It looks like the answer lies in the newer ec2_vpc_net module, specifically its multi_ok option, which by default prevents creating duplicate VPCs with the same name and CIDR block.
docs: http://docs.ansible.com/ansible/ec2_vpc_net_module.html#ec2-vpc-net
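For reference, a minimal sketch (untested) of how the same VPC might look with ec2_vpc_net, reusing the variables from the task above; multi_ok defaults to no, so a VPC with the same name and CIDR block would not be duplicated:
- name: Ensure VPC is present (ec2_vpc_net)
  ec2_vpc_net:
    name: my_project
    state: present
    region: "{{ ec2_region }}"
    cidr_block: "{{ ec2_subnet }}"
    tags: { environment: "production", tier: "DB" }
    multi_ok: no # default: do not create another VPC with the same name and CIDR block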

Related

Ansible role dependencies and facts with delegate_to

The scenario is: I have several services running on several hosts. There is one special service, the reverse proxy/load balancer. Every service needs to configure that special service on the host that runs it. During installation/update/deletion of any service via an Ansible role, I need to call the reverse proxy role on that specific host to configure the corresponding vhost.
At the moment the service calls a specific task file in the reverse proxy role with include_role to add or remove a vhost, setting some vars (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts, but I don't know a better way, especially when you need to do some extra work after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks, one for the "backend" service and one for the "reverseproxy" service, I would need a third playbook to configure "hanging" services. I'm also not sure whether I could retrieve the correct backend URL inside the reverse proxy role (just think about the HTTP scheme or paths). That doesn't sound easy, does it?
Earlier I had separate roles for adding/removing vhosts on the reverse proxy. Those roles had no dependencies, but I had to duplicate several defaults, templates, vars, etc., which isn't nice either, so I merged them into a single role. In my opinion, though, that isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb); the latter would be more like a playbook. But is calling a parameterized playbook via a role possible, or even a good idea? The main problem is that the state of the reverse proxy is the sum of all services in the inventory.
With that single included role, including it also starts all of its dependent roles (custom configuration, firewall, etc.). On top of that, I found out that the delegation does not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the webserver group. role-a has a task like:
- block:
    - setup:
    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true # or false or omit - it has no effect on Ansible 2.9 and 2.10
And in role-b, simply printing my_var from the inventory outputs:
TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
  my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which would raise several other issues, e.g. when you think about firewall dependencies. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service, but with all the tags I want to configure, and I cannot use include_tasks with apply: tags anymore inside a role that also includes other roles (the tags will be applied to all subtasks...). I miss something like include_role limited to specific tags, or one that ignores dependencies. This isn't a bug, but it has possible side effects in the case of delegate_to.
So what is the actual question? It is: what is a good way to handle dependencies between hosts and roles in Ansible, especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. So this was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost
    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost
    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was v1.8 on AWS hosts, and things may have changed.
But the point is to set user-defined facts, on each host, for that host's desired state (enabled, disabled, stopped), reload the facts, and then run the Jinja template that uses those facts.
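As an illustration only, a hypothetical uriworkermap.properties.j2 along those lines, assuming an EC2 dynamic inventory that exposes the State tag as an ec2_tag_State host variable (the worker and path names below are made up):
{# skip workers whose instance is tagged State=disabled #}
{% for host in groups['jboss'] %}
{% if hostvars[host].ec2_tag_State | default('enabled') != 'disabled' %}
/myapp/*=worker_{{ host }}
{% endif %}
{% endfor %}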

Get Name interface of network with ansible

I want to deploy an oVirt template with Ansible, but the problem is that when I want to apply the cloud-init config, the template shows a different network interface name each time:
sometimes it is eth0, other times ens33, etc. How can I get this information so that I can apply cloud-init correctly?
Thank you.
How about the ovirt_nic_info module?
If you need to gather data, there are many ovirt *_info modules.
If you have further issues/questions you can open an issue on
https://github.com/ovirt/ovirt-ansible-collection
- ovirt_nic_info:
    auth: "{{ ovirt_auth }}"
    vm: centos8
  register: result

- debug:
    msg: "{{ result.ovirt_nics[0].reported_devices[0].name }}"

Provisioning multiple spot instances with instance_interruption_behavior using Ansible

I know this seems related to other questions asked in the past, but I feel it is not.
You will be the judge of that.
I've been using Ansible for 1.5 years now, and I'm aware it's not the best tool to provision infrastructure, yet for my needs it suffices.
I'm using AWS as my cloud provider.
Is there any way to make Ansible provision multiple spot instances of the same type (similar to the count attribute in the ec2 Ansible module) with instance_interruption_behavior set to stop (the behavior) instead of the default terminate?
My goal:
Set up multiple EC2 instances, each with its own spot request to Amazon (one spot request per instance).
But instead of using the default instance_interruption_behavior, which is terminate, I want to set it to stop (the behavior).
What I've done so far:
I'm aware that Ansible has the following modules to handle EC2 infrastructure, mainly the ec2 module:
https://docs.ansible.com/ansible/latest/modules/ec2_module.html
Not to be confused with the ec2_instance module:
https://docs.ansible.com/ansible/latest/modules/ec2_instance_module.html
I could use the ec2 module to set up spot instances, as I've done so far, as follows:
ec2:
  key_name: mykey
  region: us-west-1
  instance_type: "{{ instance_type }}"
  image: "{{ instance_ami }}"
  wait: yes
  wait_timeout: 100
  spot_price: "{{ spot_bid }}"
  spot_wait_timeout: 600
  spot_type: persistent
  group_id: "{{ created_sg.group_id }}"
  instance_tags:
    Name: "Name"
  count: "{{ my_count }}"
  vpc_subnet_id: "{{ subnet }}"
  assign_public_ip: yes
  monitoring: no
  volumes:
    - device_name: /dev/sda1
      volume_type: gp2
      volume_size: "{{ volume_size }}"
Pre-problem:
The above module usage is nice and clean and allows me to set up spot instances according to my requirements, using the ec2 module and the count attribute.
Yet it doesn't allow me to set instance_interruption_behavior to stop (unless I'm wrong); it just defaults to terminate.
Other modules:
To be released in Ansible 2.8.0 (currently only available in devel), there is a new module called ec2_launch_template:
https://docs.ansible.com/ansible/devel/modules/ec2_launch_template_module.html
This module gives you more attributes which, in combination with the ec2_instance module, allow you to set the desired instance_interruption_behavior to stop.
This is what I did:
- hosts: 127.0.0.1
  connection: local
  gather_facts: True
  vars_files:
    - vars/main.yml
  tasks:
    - name: "create security group"
      include_tasks: tasks/create_sg.yml

    - name: "Create launch template to support interruption behavior 'STOP' for spot instances"
      ec2_launch_template:
        name: spot_launch_template
        region: us-west-1
        key_name: mykey
        instance_type: "{{ instance_type }}"
        image_id: "{{ instance_ami }}"
        instance_market_options:
          market_type: spot
          spot_options:
            instance_interruption_behavior: stop
            max_price: "{{ spot_bid }}"
            spot_instance_type: persistent
        monitoring:
          enabled: no
        block_device_mappings:
          - device_name: /dev/sda1
            ebs:
              volume_type: gp2
              volume_size: "{{ manager_instance_volume_size }}"
      register: my_template

    - name: "set up a single spot instance"
      ec2_instance:
        instance_type: "{{ instance_type }}"
        tags:
          Name: "My_Spot"
        launch_template:
          id: "{{ my_template.latest_template.launch_template_id }}"
        security_groups:
          - "{{ created_sg.group_id }}"
The main problem:
As the above snippet shows, the ec2_launch_template Ansible module does not support a count attribute, and the ec2_instance module doesn't offer one either.
Is there any way to make Ansible provision multiple instances of the same type (similar to the count attribute in the ec2 module)?
In general, if you are managing lots of copies of EC2 instances at once, you probably want to use either Auto Scaling groups or EC2 Fleets.
There is an ec2_asg module that allows you to manage Auto Scaling groups via Ansible. This would let you deploy multiple EC2 instances based on that launch template.
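For example, a rough sketch (untested, assuming Ansible 2.8+ where ec2_asg accepts a launch template, and reusing the names registered in the playbook above):
- name: "Keep a fixed number of spot instances running via an Auto Scaling group"
  ec2_asg:
    name: spot-asg # illustrative group name
    region: us-west-1
    launch_template:
      launch_template_id: "{{ my_template.latest_template.launch_template_id }}"
    min_size: "{{ my_count }}"
    max_size: "{{ my_count }}"
    desired_capacity: "{{ my_count }}"
    vpc_zone_identifier:
      - "{{ subnet }}"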
Another technique would be to use the cloudformation or terraform modules, if AWS CloudFormation templates or Terraform plans support what you want to do. This would also let you use Jinja2 templating to do some clever template generation before supplying the infrastructure templates, if necessary.
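A minimal sketch of that approach (the file names and stack name are illustrative): render the stack body with the template module first, then hand it to the cloudformation module.
- name: Render CloudFormation template from a Jinja2 source
  template:
    src: templates/spot-instances.yml.j2 # hypothetical Jinja2 source
    dest: /tmp/spot-instances.yml

- name: Create or update the stack
  cloudformation:
    stack_name: spot-instances
    state: present
    region: us-west-1
    template: /tmp/spot-instances.yml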

Allocate EC2 through Ansible

When creating an EC2 instance through Ansible, how do you specify the security group (this is for an Amazon VPC environment that is not the account default)? In my case, I am attempting to assign an existing security group to my webserver EC2 instance, one that restricts traffic to only the traffic coming from the ELB that sits in front of it. If I try the following:
- name: Create webserver instance
  ec2:
    key_name: "{{ project_name }}-{{ env }}-key"
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    instance_tags: '{"Name":"{{ project_name }}-{{ env }}-{{ webserver_name }}","Owner":"{{ project_name }}", "Type":"{{ webserver_name }}","Environment":"{{ env }}"}'
    region: "{{ aws_region }}"
    group: "{{ security_group_name }}"
    wait: true
  register: ec2
where {{ security_group_name }} is the 'Group Name' found in the AWS console, I receive the following error: 'Value () for parameter groupId is invalid. The value cannot be empty'
If I try the following:
- name: Create webserver instance
  ec2:
    key_name: "{{ project_name }}-{{ env }}-key"
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    instance_tags: '{"Name":"{{ project_name }}-{{ env }}-{{ webserver_name }}","Owner":"{{ project_name }}", "Type":"{{ webserver_name }}","Environment":"{{ env }}"}'
    region: "{{ aws_region }}"
    group_id: "{{ security_group_id }}"
    wait: true
  register: ec2
where {{ security_group_id }} is the 'Group ID' found in the AWS console (such as sg-xxxxxx), I receive the same error. The Ansible documentation states that 'group' and 'group_id' are used to specify the security group (http://docs.ansible.com/ansible/ec2_module.html).
The only thing I can think of is that AWS cannot find the security group, either because it does not know which VPC to place the instance in, or because it places the instance in my default VPC and cannot find the security group there since it belongs to a different VPC.
So maybe a better question is: how do I specify the VPC for a particular EC2 instance (when I have multiple VPCs in a region)?
I receive the following error: 'Value () for parameter groupId is invalid. The value cannot be empty'
It sounds like those variables aren't defined; where are you defining them? Does it work when you don't use a variable, but put in the value directly?
So maybe a better question is: how do I specify the VPC for a particular EC2 instance (when I have multiple VPCs in a region)?
The VPC is implied by other values, notably group/group_id and vpc_subnet_id.
One of the main advantages of using group_id instead of group is that you'll get an error if you accidentally use a subnet for the wrong VPC, whereas if you have security groups with the same names in multiple VPCs, using group and an incorrect vpc_subnet_id will successfully launch a machine in the wrong place.
There is no way to specify the VPC ID in your playbook. Instead you specify the vpc_subnet_id; AWS can figure out the VPC from the subnet ID. You are not specifying a subnet ID, so AWS assumes that you want to launch in EC2-Classic. Since the specified security group is not found in EC2-Classic, you get the 'not found'/'invalid' error.
How to fix this? Find the ID of the subnet where you want to launch the instance. Each VPC can have one or more public and/or private subnets. From the AWS dashboard, select the VPC service, which lists the subnets and VPCs. Find the subnet you want to launch in and specify its ID in the playbook:
vpc_subnet_id: subnet-29e63245
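Put together, a sketch of the original task with the subnet added (the subnet ID is just the example value above); with vpc_subnet_id set, group_id is resolved inside that subnet's VPC:
- name: Create webserver instance
  ec2:
    key_name: "{{ project_name }}-{{ env }}-key"
    image: "{{ image }}"
    instance_type: "{{ instance_type }}"
    region: "{{ aws_region }}"
    group_id: "{{ security_group_id }}"
    vpc_subnet_id: subnet-29e63245 # example subnet ID from the answer above
    wait: true
  register: ec2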

Cannot access machine after creating it through the ec2 module within the same script

I have problems with my playbook, which should create new EC2 instances through the built-in ec2 module and then connect to them to set some defaults.
I went through a lot of tutorials/posts, but none of them mentioned the same problem, therefore I'm asking here.
Everything in terms of creation goes well, but once the instances are created and the playbook has successfully waited for SSH to come up, I get an error that says the machine is unreachable:
UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
I tried to connect manually (from a terminal on the same host) and was successful (while the playbook was still waiting for the connection). I also tried increasing the timeout generally in ansible.cfg. I verified that the given hostname is valid, and I also tried the public IP instead of the public DNS name, but nothing helps.
Basically, my playbook looks like this:
---
- name: create ec2 instances
  hosts: local
  connection: local
  gather_facts: False
  vars:
    machines:
      - { type: "t2.micro", instance_tags: { Name: "machine1", group: "some_group" }, security_group: ["SSH"] }
  tasks:
    - name: launch new ec2 instances
      local_action: ec2
        group={{ item.security_group }}
        instance_type={{ item.type }}
        image=...
        wait=true
        region=...
        keypair=...
        count=1
        instance_tags=...
      with_items: machines
      register: ec2

    - name: wait for SSH to come up
      local_action: wait_for host={{ item.instances.0.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.results

    - name: add host into launched group
      add_host: name={{ item.instances.0.public_ip }} group=launched
      with_items: ec2.results

- name: with the newly provisioned EC2 node configure basic stuff
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - common
Note: in many tutorials the results from creating EC2 instances are accessed in a different way, but that's probably a matter for a different question.
Thanks.
Solved:
I don't know how, but it suddenly started to work. No clue. In case I find some new info, I will update this question.
A couple points that may help:
I'm guessing it's a version difference, but I've never seen a 'results' key in the registered 'ec2' variable. In any case, I usually use 'tagged_instances' -- this ensures that even if the play didn't create an instance (i.e. because a matching instance already existed from a previous run), the variable will still return instance data you can use to add a new host to the inventory.
Try adding search_regex: "OpenSSH" to your wait_for task to ensure it doesn't run before the SSH daemon is completely up.
The modified plays would look like this:
- name: wait for SSH to come up
  local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started search_regex="OpenSSH"
  with_items: ec2.tagged_instances

- name: add host into launched group
  add_host: name={{ item.public_ip }} group=launched
  with_items: ec2.tagged_instances
You also, of course, want to make sure that Ansible knows to use the specified key when SSHing to the remote host, either by adding ansible_ssh_private_key_file to the inventory entry or by specifying --private-key=... on the command line.
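One way to do that, sketched here with an assumed key path, is to set the connection variables directly when adding the host, so the second play picks them up:
- name: add host into launched group
  add_host:
    name: "{{ item.public_ip }}"
    groups: launched
    ansible_ssh_user: ubuntu
    ansible_ssh_private_key_file: ~/.ssh/mykey.pem # assumed path to the keypair used at launch
  with_items: ec2.tagged_instances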
