Launch EC2 instances with a Name - ansible

I have a playbook that launches n EC2 instances. When launching, it tags them with Env: {{ env }} and ManagedBy: Ansible, but I'd also like to add a name with some semblance of meaning. For me, this is usually something with a format like <env>-<purpose><number> (e.g. dev-web02 or prd-db01).
I'd really like Ansible to do that for me, if at all possible.
I currently have my playbook built out such that {{ exact_count }} instances are verified to exist using the ec2 module. Created instances are then given a name tag using the ec2_tag module:
- name: Instances | Tag each new instance with a name
  ec2_tag:
    aws_access_key: "{{ awscli.access_key }}"
    aws_secret_key: "{{ awscli.secret_key }}"
    region: "{{ aws.region }}"
    resource: "{{ item.1.id }}"
    state: present
    tags:
      Name: "{{ env_short }}-{{ 'ws%02d' | format(item.0) }}"
  with_indexed_items: webservers.instances
This works great the first time it's ever run. The problem is that it iterates not across all of my instances, but only across those that may have been created during this run of the playbook.
Is there a better way to do this such that new instances are able to detect the sequence and continue it so that on a second run, for example, after I've changed {{ exact_count }} from 1 to 2, the server created by that second run will get a Name value of dev-ws02?
This may be a lot to ask of Ansible and there may be a good reason I haven't found an answer, but sometimes you get pleasantly surprised, so I thought I'd ask.

Empirically, I've figured out that using the tagged_instances property seems to work. This value appears to contain every server that matches the tags, not just any that were just created. At the moment, I can't remember whether the ec2_tag module simply won't overwrite a tag that already exists or if existing tags are updated, but the net result seems to be working out the way I'd like it to work out.
Here is my updated task:
- name: Instances | Tag each new instance with a name
  ec2_tag:
    aws_access_key: "{{ awscli.access_key }}"
    aws_secret_key: "{{ awscli.secret_key }}"
    region: "{{ aws.region }}"
    resource: "{{ item.1.id }}"
    state: present
    tags:
      # e.g. dev-web03, prd-anz01
      Name: "{{ env_short }}-{{ server_type_abbrev }}{{ '%02d' | format(item.0) }}"
  with_indexed_items: ec2.tagged_instances
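For context, here is a minimal sketch of the creation task this loops over (the image and instance_type values are illustrative placeholders, not from the original playbook). When the ec2 module runs with exact_count and count_tag, its registered result includes tagged_instances, which lists every instance matching the count tags, not just the newly created ones:

- name: Instances | Ensure exact_count instances exist
  ec2:
    aws_access_key: "{{ awscli.access_key }}"
    aws_secret_key: "{{ awscli.secret_key }}"
    region: "{{ aws.region }}"
    image: "{{ ami_id }}"        # placeholder AMI variable
    instance_type: t2.micro      # placeholder instance type
    exact_count: "{{ exact_count }}"
    count_tag:
      Env: "{{ env }}"
      ManagedBy: Ansible
    instance_tags:
      Env: "{{ env }}"
      ManagedBy: Ansible
  register: ec2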

Related

Ansible - multiple items in path, but cannot use loop

I'm not sure how to describe the title or my question properly; feel free to edit.
I'll jump right in. I have this working piece of Ansible code:
- file:
    path: "{{ item.item.value.my_folder }}/{{ item.item.value.filename }}"
    state: absent
  loop: "{{ my_stat.results }}"
  when: item.stat is defined and item.stat.exists and item.stat.islnk
If Ansible is run, the task is executed properly, and the file is removed from the system.
Now, the issue. What I want Ansible to do is loop over multiple items described in "path". This way I won't have to create a separate task for each filename I want deleted.
Example:
- file:
    path:
      - "{{ item.item.value.my_folder }}/{{ item.item.value.filename }}"
      - "{{ item.item.value.my_folder }}/{{ item.item.value.other_filename }}"
    state: absent
  loop: "{{ my_stat.results }}"
  when: item.stat is defined and item.stat.exists and item.stat.islnk
But Ansible doesn't process the items in the list under 'path', so the filenames will not be deleted.
I see I cannot use 'loop', since it is already in use for another value.
Question: How would I configure Ansible so that I can have multiple items in the path and let Ansible delete the filenames, while keeping the current loop intact?
-- EDIT --
Output of the tasks:
I've removed the pastebin URL since I believe it has no added value for the question, and the answer has been given.
As described in the documentation, path is of type path, so Ansible will only accept a valid path in there, not a list.
What you can do, though, is slightly modify your loop and take the product of your existing list and a list of the filename properties you want to remove, then use those as the key to access item.item.value (or item.0.item.value now, since we have the product filter applied).
For example:
- file:
    path: "{{ item.0.item.value.my_folder }}/{{ item.0.item.value[item.1] }}"
    state: absent
  loop: "{{ my_stat.results | product(['filename', 'other_filename']) }}"
  when:
    - item.0.stat is defined
    - item.0.stat.exists
    - item.0.stat.islnk
PS: a list in a when is the same as joining the conditions with and in that same when.
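For example, applied to the task above, these two forms behave identically:

when:
  - item.0.stat is defined
  - item.0.stat.exists
  - item.0.stat.islnk

# is the same as:

when: item.0.stat is defined and item.0.stat.exists and item.0.stat.islnk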

Ansible: how to load variables from another role, without executing it?

I have a task to create a one-off cleanup playbook which uses variables from a role, but I don't need to execute that role. Is there a way to provide a role name and get everything from its defaults and vars, without hardcoding paths to it? I also want to use vars defined in group_vars or host_vars with higher precedence than the ones included from the role.
Example task:
- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ kafka_service_name }}"
    - "{{ zookeeper_service_name }}"
  ignore_errors: true
where kafka_service_name and zookeeper_service_name are contained in the kafka role, but may also be present in, e.g., group_vars.
I came up with a fairly hacky solution, which looks like this:
- name: save old host_vars
  set_fact:
    old_host_vars: "{{ hostvars[inventory_hostname] }}"

- name: load kafka role variables
  include_vars:
    dir: "{{ item.root }}/{{ item.path }}"
  vars:
    params:
      files:
        - kafka
      paths: "{{ ['roles'] + lookup('config', 'DEFAULT_ROLES_PATH') }}"
  with_filetree: "{{ lookup('first_found', params) }}"
  when: item.state == 'directory' and item.path in ['defaults', 'vars']

- name: stop kafka and zookeeper services if they exist
  service:
    name: "{{ item }}"
    state: stopped
  with_items:
    - "{{ old_host_vars['kafka_service_name'] | default(kafka_service_name) }}"
    - "{{ old_host_vars['zookeeper_service_name'] | default(zookeeper_service_name) }}"
The include_vars task finds the first kafka role folder in ./roles and in the default role locations, then includes files from its defaults and vars directories, in the correct order.
I had to save the old hostvars because include_vars has higher precedence than anything but extra vars (as per the Ansible docs); the included var is then used only if old_host_vars returned nothing.
If you don't have a requirement to load group_vars, include_vars works quite nicely as a single task and looks much better.
UPD: Here is the regexp that I used to replace vars with the old_host_vars hack.
This was tested in vscode search/replace, but can be adjusted for any other editor
Search for vars that start with kafka_:
\{\{ (kafka_\w*) \}\}
Replace with:
{{ old_host_vars['$1'] | default($1) }}
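So, for example, a reference like

{{ kafka_service_name }}

becomes

{{ old_host_vars['kafka_service_name'] | default(kafka_service_name) }}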

Can I use Jinja2 `map` filter in an Ansible play to get values from an array of objects?

I have a playbook for creating some EC2 instances and then doing some stuff with them. The relevant pieces look approximately like this:
- name: create ec2 instances
  ec2:
    id: '{{ item.name }}'
    instance_type: '{{ item.type }}'
  register: ec2
  with_items: '{{ my_instance_defs }}'

- name: wait for SSH
  wait_for:
    host: '{{ item.instances[0].private_ip }}'
    port: 22
  with_items: '{{ ec2.results }}'
This works as intended, but I am not especially happy with the item.instances[0].private_ip expression, partly because it shows really large objects in the play summary. I would love to have the with_items part just be an array of IP addresses, rather than an array of objects with arrays of objects inside them. In Python, I would just do something like:
ips = [r['instances'][0]['private_ip'] for r in ec2['results']]
And then I would use with_items: '{{ ips }}' in the second task.
Is there a way I can do the same thing using a J2 filter in the YAML of the play? Seems like http://docs.ansible.com/ansible/playbooks_filters.html#extracting-values-from-containers might be helpful, but I think that presupposes I have an array of keys/indices/whatever.
The map filter is your friend here.
Something like this:
with_items: "{{ ec2.results | map(attribute='instances') | map('first') | map(attribute='private_ip') | list }}"
The code above is not tested.
You may want to try with debug first and gradually add maps to get the required result.
Don't forget to put | list at the end to make your map readable.
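For instance, a debug task (a sketch using the registered ec2 variable from the question) lets you check each stage of the chain before wiring it into with_items:

- name: inspect the first map stage
  debug:
    msg: "{{ ec2.results | map(attribute='instances') | list }}"

- name: inspect the final list of private IPs
  debug:
    msg: "{{ ec2.results | map(attribute='instances') | map('first') | map(attribute='private_ip') | list }}"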
My example is pulled from my playbook removing an autoscaling ECS cluster. I modified the above answer to get mine working.
- name: get list of instances in ASG
  ec2_instance_facts:
    filters:
      "tag:aws:autoscaling:groupName": "{{ item.name }}-{{ stack }}-scalinggroup"
  register: asg_host_list

- name: list ecs info
  debug:
    msg: "{{ asg_host_list }}"

- name: get just the host ids
  set_fact:
    hostlist: "{{ asg_host_list.instances | map(attribute='instance_id') | list }}"
For my use, hostlist can be fed directly into ecs_instance, since it takes a list of instance ids to process.
So, this is tested and works.

How can Ansible "register" in a variable the result of including a playbook?

How can an Ansible playbook register in a variable the result of including another playbook?
For example, would the following register the result of executing tasks/foo.yml in result_of_foo?
tasks:
  - include: tasks/foo.yml
  - register: result_of_foo
How else can Ansible record the result of a task sequence?
The short answer is that this can't be done.
The register statement is used to store the output of a single task into a variable. The exact contents of the registered variable can vary widely depending on the type of task (for example a shell task will include stdout & stderr output from the command you run in the registered variable, while the stat task will provide details of the file that is passed to the task).
If you have an include file with an arbitrary number of tasks within it then Ansible would have no way of knowing what to store in the variable in your example.
Each individual task within your include file can register variables, and you can reference those variables elsewhere, so there's really no need to even do something like this.
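To illustrate, a minimal sketch (the file and variable names here are hypothetical): register a variable on a task inside the included file, then reference it from the parent playbook after the include.

# tasks/foo.yml
- name: do something and record the result
  command: echo hello
  register: foo_task_result

# parent playbook
tasks:
  - include: tasks/foo.yml
  - debug:
      msg: "{{ foo_task_result.stdout }}"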
I was able to do this by passing a variable name as a variable to be used in the task. My main.yaml and the included cgw.yaml files are below.
main.yaml:
- name: Create App A CGW
  include: cgw.yaml
  vars:
    bgp_asn: "{{ asn_spoke }}"
    ip_address: "{{ eip_app_a.public_ip }}"
    name: cgw-app-a
    region: "{{ aws_region }}"
    aws_access_key: "{{ ec2_access_key }}"
    aws_secret_key: "{{ ec2_secret_key }}"
    register: cgw_app_a
cgw.yaml:
- name: "{{ name }}"
  ec2_customer_gateway:
    bgp_asn: "{{ bgp_asn }}"
    ip_address: "{{ ip_address }}"
    name: "{{ name }}"
    region: "{{ region }}"
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
  register: "{{ register }}"

Tag Name from EC2-group in Ansible

This is another question following on from this post:
loops over the registered variable to inspect the results in ansible
So basically having:
- name: EC2Group | Creating an EC2 Security Group inside the Mentioned VPC
  local_action:
    module: ec2_group
    name: "{{ item.sg_name }}"
    description: "{{ item.sg_description }}"
    region: "{{ vpc_region }}"  # Change the AWS region here
    vpc_id: "{{ vpc.vpc_id }}"  # vpc is the register name; you can also set it manually
    state: present
    rules: "{{ item.sg_rules }}"
  with_items: ec2_security_groups
  register: aws_sg

- name: Tag the security group with a name
  local_action:
    module: ec2_tag
    resource: "{{ item.group_id }}"
    region: "{{ vpc_region }}"
    state: present
    tags:
      Name: "{{ vpc_name }}-group"
  with_items: aws_sg.results
I wonder how it is possible to set the tag Name

tags:
  Name: "{{ item.sg_name }}"

to the same value as the primary name definition on the security groups:
local_action:
  module: ec2_group
  name: "{{ item.sg_name }}"
I am trying to make that possible, but I am not sure how to do it. Is it also possible to retrieve that item?
Thanks!
Tags are available after the ec2.py inventory script is run. They always take the form tag_key_value, where 'key' is the name of the tag and 'value' is the value within that tag, i.e. if you create a tag called 'Application' and give it a value of 'AwesomeApplication', you would get 'tag_Application_AwesomeApplication'.
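That group name can then be targeted like any other inventory group (a sketch, assuming ec2.py is your inventory source and the tag above exists):

- hosts: tag_Application_AwesomeApplication
  tasks:
    - name: check connectivity to every instance tagged Application=AwesomeApplication
      ping: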
That said, if you have just created instances and want to run some commands against them, parse the output from the create-instance command to get a list of the IP addresses and add them to a temporary group; then you can run commands against that group within the same playbook:
...
- name: add hosts to temporary group
  add_host: name="{{ item }}" groups=temporarygroup
  with_items: parsedipaddresses

- hosts: temporarygroup
  tasks:
    - name: awesome script to do stuff goes here
...
