I don't properly understand how to use the template_parameters parameter correctly (http://docs.ansible.com/ansible/cloudformation_module.html).
It looks like I could use this parameter to override parameters in the template. A very simple configuration is:
sandbox_cloudformation.yml
---
- name: Sandbox CloudFormation
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch Ansible CloudFormation Stack
      cloudformation:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        stack_name: "{{ aws_default_cloudformation_stack_name }}"
        state: "present"
        region: "{{ aws_default_region }}"
        disable_rollback: true
        template: "files/cloudformation.yml"
      args:
        template_parameters:
          GroupDescription: "Sandbox Security Group"
cloudformation.yml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sandbox Stack
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: "DEMO"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
But I get the following error:
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Parameter values specified for a template which does not require them."}
I also get this error if I try to use, for example:
template_parameters:
  KeyName: "{{ aws_default_keypair }}"
  InstanceType: "{{ aws_default_instance_type }}"
Also, please advise on the best approach for using the cloudformation module with Ansible. Maybe the best way is to generate the CloudFormation template first and then use it in the next step? Like:
- name: Render CloudFormation Template
  template: src=cloudformation.yml.j2 dest=rendered_templates/cloudformation.yml

- name: Launch Ansible CloudFormation Stack
  cloudformation:
    template: "rendered_templates/cloudformation.yml"
Thanks in advance!
You can use template_parameters to pass parameters to a CloudFormation template. In the template you'll refer to the parameters using Ref. In your case:
playbook:
...
    args:
      template_parameters:
        GroupDescriptionParam: "Sandbox Security Group"
...
template:
...
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription:
        Ref: GroupDescriptionParam
...
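Note that the parameter must also be declared in the template's Parameters section; passing values to a template that declares no parameters is exactly what produces the error you saw ("Parameter values specified for a template which does not require them"). A minimal declaration would be:
Parameters:
  GroupDescriptionParam:
    Type: String
    Default: "Sandbox Security Group"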
See
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group.html#w1ab2c19c12d282c19
for examples of AWS SecurityGroup CloudFormation templates.
Instead of using the template_parameters argument of the Ansible cloudformation module, I have found it handy to create a Jinja2 template and convert it into a CloudFormation template with the Ansible template module, just as you suggested at the end of your question. With this approach you can omit args and template_parameters in the cloudformation module call.
For example:
- name: Generate CloudFormation template
  become: no
  run_once: yes
  local_action:
    module: template
    src: "mycloudformation.yml.j2"
    dest: "{{ cf_templ_dir }}/mycloudformation.yml"
  tags:
    - cloudformation
    - cf_template

- name: Deploy the CloudFormation stack
  become: no
  run_once: yes
  local_action:
    module: cloudformation
    stack_name: "{{ cf_stack_name }}"
    state: present
    region: "{{ ec2_default_region }}"
    template: "{{ cf_templ_dir }}/mycloudformation.yml"
  register: cf_result
  tags:
    - cloudformation

- name: Show CloudFormation output
  run_once: yes
  become: no
  debug: var=cf_result
  tags:
    - cloudformation
And mycloudformation.yml.j2:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sandbox Stack
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: {{ group_desc_param_defined_somewhere_in_ansible }}
...
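Here group_desc_param_defined_somewhere_in_ansible is just an ordinary Ansible variable; for instance, it could live in group_vars or in the play's vars (an illustrative sketch, not a required name):
vars:
  group_desc_param_defined_somewhere_in_ansible: "Sandbox Security Group"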
I have a role which uses with_items:
- name: Create backup config files
  template:
    src: "config.yml.j2"
    dest: "/tmp/{{ project }}_{{ env }}_{{ item.type }}.yml"
  with_items:
    - "{{ backups }}"
I can access item.type, as usual, but not project or env, which are defined outside the collection:
deploy/main.yml
- hosts: ...
  vars:
    project: ...
    rails_env: qa
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ project }}"
      env: "{{ rails_env }}"
      backups:
        - type: mysql
          username: ...
          password: ...
The error I get is:
Error was a <class 'ansible.errors.AnsibleError'>, original message: An unhandled exception occurred while templating '{{ project }}'
The template, config.yml.j2, is:
type: {{ item.type }}
project: {{ project }}
env: {{ env }}
database:
  username: {{ item.username }}
  password: {{ item.password }}
It turns out you can't define a role parameter in terms of an existing variable with the same name, so project: "{{ project }}" will always fail with a recursive templating error.
Instead, project can be omitted and the existing definition, in vars, will be used:
- hosts: ...
  vars:
    project: ... # <- already defined here
  roles:
    - role: ../../../roles/deploy/dolly
      backups:
        - type: mysql
          username: ...
          password: ...
If the var is not defined in vars, it can be defined in the role:
- hosts: ...
  vars:
    name: ...
  roles:
    - role: ../../../roles/deploy/dolly
      project: "{{ name }}" # <- define here
      backups:
        - type: mysql
          username: ...
          password: ...
I'm trying to get Ansible to create a launch configuration (using ec2_lc) and an auto scaling group (ec2_asg). My playbook creates the launch configuration just fine, but the ASG task produces an error:
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No launch config found with name {'name': 'dockerLC', 'instance_type': 't2.micro', 'image_id': 'ami-e97c548c', 'changed': False, 'failed': False, 'arn': None, 'created_time': 'None', 'security_groups': ['sg-e0c0de88'], 'result': {'image_id': 'ami-e97c548c', 'key_name': 'ansible-keys', 'launch_configuration_name': 'dockerLC', 'ebs_optimized': False, 'instance_type': 't2.micro', 'created_time': 'None', 'associate_public_ip_address': True, 'instance_monitoring': False, 'placement_tenancy': 'default', 'security_groups': ['sg-e0c0de88'], 'block_device_mappings': []}}"}
I'm unsure why this error is being thrown because I certainly have a dockerLC launch configuration in the AWS EC2 web interface. My playbook is below.
tasks:
  - name: Create Launch Configuration
    local_action:
      module: ec2_lc
      name: dockerLC
      region: '{{ aws_region }}'
      image_id: '{{ ami_id }}'
      key_name: '{{ key_name }}'
      security_groups: '{{ security_group_id }}'
      # security_groups: '{{ security_group }}'
      instance_type: '{{ instance_type }}'
      assign_public_ip: True
      vpc_id: '{{ vpc }}'
    register: lc

  # - pause:
  #     seconds: 10

  - name: Create Auto Scaling Group
    local_action:
      module: ec2_asg
      name: dockerASG
      region: '{{ aws_region }}'
      launch_config_name: '{{ lc }}'
      min_size: 3
      max_size: 5
      wait_for_instances: True
      availability_zones: ['us-east-2a', 'us-east-2b', 'us-east-2c']
      vpc_zone_identifier: ' {{ vpc_zones }} '
      tags:
        - function: docker
          billingGroup: Work
When you register a variable from a task, the registered variable is usually a hash containing more information than just a string. In this case, lc is a hash containing:
{
  'name': 'dockerLC',
  'instance_type': 't2.micro',
  'image_id': 'ami-e97c548c',
  'changed': False,
  'failed': False,
  'arn': None,
  'created_time': 'None',
  'security_groups': ['sg-e0c0de88'],
  'result': {
    'image_id': 'ami-e97c548c',
    'key_name': 'ansible-keys',
    'launch_configuration_name': 'dockerLC',
    'ebs_optimized': False,
    'instance_type': 't2.micro',
    'created_time': 'None',
    'associate_public_ip_address': True,
    'instance_monitoring': False,
    'placement_tenancy': 'default',
    'security_groups': ['sg-e0c0de88'],
    'block_device_mappings': []
  }
}
But launch_config_name wants just the name as a string. In this case, you want to reference launch_config_name: '{{ lc.name }}', which is the name of the launch configuration. You could likely also use the variable lc.result.launch_configuration_name (as an example).
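When in doubt about the structure of a registered variable, a quick debug task (a minimal sketch) will print the whole hash so you can see which key to reference:
- name: Show the registered launch configuration result
  debug:
    var: lc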
I've started to experiment with Ansible, using playbooks to automate some routine tasks on network devices. I was able to get some basic things working and learned in the process, but I know my knowledge is limited, so when I look at this playbook and see how much of it appears redundant, I have to assume there are better ways to eliminate the redundancy and make things cleaner and more efficient.
The example I want to use to get some ideas is configuring a new VLAN on a group of devices.
Typically a new VLAN first needs to be configured on the two distribution switches, and then there are specific interfaces on those two switches that we have to add the VLAN to.
So, for this first part I have the two hosts in a group called "dist" in my hosts file:
[dist]
DIST01 ansible_host=10.10.1.1
DIST02 ansible_host=10.10.1.2
Then I created the following in my playbook:
- name: Add Heartbeat VLAN to DIST
  hosts: dist
  connection: local
  gather_facts: no
  tasks:
    - name: Include Login Credentials
      include_vars: secrets.yml

    - name: Define Provider
      set_fact:
        provider:
          host: "{{ ansible_host }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: Ensure VLAN Exists
      nxos_vlan:
        vlan_id: "2600"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"

    - name: Ensure VLAN Name Configured
      nxos_vlan:
        vlan_id: "{{ item.vid }}"
        name: "{{ item.name }}"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"
      with_items:
        - { vid: 2600, name: Ansible Heartbeat VLAN }

    - name: ASSIGN VLAN TO TRUNK PORTS
      nxos_switchport:
        interface: "{{ item.interface }}"
        mode: trunk
        trunk_vlans: "{{ item.vlan }}"
        provider: "{{ provider }}"
      with_items:
        - { interface: po850, vlan: 2600 }
        - { interface: po860, vlan: 2600 }
        - { interface: po865, vlan: 2600 }
        - { interface: po868, vlan: 2600 }
        - { interface: po871, vlan: 2600 }
        - { interface: po872, vlan: 2600 }
        - { interface: po875, vlan: 2600 }
        - { interface: po877, vlan: 2600 }
        - { interface: po884, vlan: 2600 }
So, for each host in that group it iterates through a list of interfaces / ports and adds the vlan specified.
Question #1.
The first thing that stands out as inefficient is that I don't believe it's very wise to have to specify vlan: 2600 everywhere.
I would think I should just set the VLAN as a variable somewhere (in the playbook? in some other file that gets called?) to be used in each place it is needed, as in the sketch below.
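For example, something like this is what I have in mind (an untested sketch; heartbeat_vlan is a name I made up):
vars:
  heartbeat_vlan: 2600
...
      trunk_vlans: "{{ heartbeat_vlan }}"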
Next set of tasks:
After the previous play, the next one requires us to connect to each access switch that needs the VLAN and configure the new VLAN there.
The issue I run into here is that the port-channel on each of these switches is a different interface number, so I can't apply the same config by just iterating through a list of devices.
For instance what I have to do is something like this:
host: ACCESS01 interface: po850 vlan: 2600
host: ACCESS02 interface: po860 vlan: 2600
host: ACCESS03 interface: po870 vlan: 2600
So for each host/switch you add the vlan to the interface associated with that switch.
I just created a new play for each device that specifies the interface to configure for that switch.
Example:
- name: Add Heartbeat VLAN to ACCESS01
  hosts: ACCESS01
  connection: local
  gather_facts: no
  tasks:
    - name: Include Login Credentials
      include_vars: secrets.yml

    - name: Define Provider
      set_fact:
        provider:
          host: "{{ ansible_host }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: Ensure VLAN Exists
      nxos_vlan:
        vlan_id: "2600"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"

    - name: Ensure VLAN Name Configured
      nxos_vlan:
        vlan_id: "{{ item.vid }}"
        name: "{{ item.name }}"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"
      with_items:
        - { vid: 2600, name: Ansible Heartbeat VLAN }

    - name: ASSIGN VLAN TO PORTS
      nxos_switchport:
        interface: "{{ item.interface }}"
        mode: trunk
        trunk_vlans: "{{ item.vlan }}"
        provider: "{{ provider }}"
      with_items:
        - { interface: po850, vlan: 2600 }

- name: Add Heartbeat VLAN to ACCESS02
  hosts: ACCESS02
  connection: local
  gather_facts: no
  tasks:
    - name: Include Login Credentials
      include_vars: secrets.yml

    - name: Define Provider
      set_fact:
        provider:
          host: "{{ ansible_host }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: Ensure VLAN Exists
      nxos_vlan:
        vlan_id: "2600"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"

    - name: Ensure VLAN Name Configured
      nxos_vlan:
        vlan_id: "{{ item.vid }}"
        name: "{{ item.name }}"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"
      with_items:
        - { vid: 2600, name: Ansible Heartbeat VLAN }

    - name: ASSIGN VLAN TO PORTS
      nxos_switchport:
        interface: "{{ item.interface }}"
        mode: trunk
        trunk_vlans: "{{ item.vlan }}"
        provider: "{{ provider }}"
      with_items:
        - { interface: po860, vlan: 2600 }

- name: Add Heartbeat VLAN to ACCESS03
  hosts: ACCESS03
  connection: local
  gather_facts: no
  tasks:
    - name: Include Login Credentials
      include_vars: secrets.yml

    - name: Define Provider
      set_fact:
        provider:
          host: "{{ ansible_host }}"
          username: "{{ creds['username'] }}"
          password: "{{ creds['password'] }}"

    - name: Ensure VLAN Exists
      nxos_vlan:
        vlan_id: "2600"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"

    - name: Ensure VLAN Name Configured
      nxos_vlan:
        vlan_id: "{{ item.vid }}"
        name: "{{ item.name }}"
        state: present
        host: "{{ ansible_host }}"
        provider: "{{ provider }}"
      with_items:
        - { vid: 2600, name: Ansible Heartbeat VLAN }

    - name: ASSIGN VLAN TO PORTS
      nxos_switchport:
        interface: "{{ item.interface }}"
        mode: trunk
        trunk_vlans: "{{ item.vlan }}"
        provider: "{{ provider }}"
      with_items:
        - { interface: po870, vlan: 2600 }
And so you see... when I see nearly identical blocks repeated over and over, I have to assume there is a better way; I just don't know enough yet to work it out on my own.
Question #2. I suspect there is a better way than repeating the following at the start of each play in the playbook (I sketch one idea below the snippet):
tasks:
  - name: Include Login Credentials
    include_vars: secrets.yml

  - name: Define Provider
    set_fact:
      provider:
        host: "{{ ansible_host }}"
        username: "{{ creds['username'] }}"
        password: "{{ creds['password'] }}"
Question #3. Could I possibly just list this data somewhere, either in the playbook or in another file, and then create a task that could iterate through the data to determine which port needs to be configured?
host: ACCESS01 interface: po850 vlan: 2600
host: ACCESS02 interface: po860 vlan: 2600
host: ACCESS03 interface: po870 vlan: 2600
Some sort of logic for this, in my mind, would be something like: if host equals ACCESS01, then interface equals po850.
So the task could just reference variables that are populated depending on the host it is currently working on? Something like the sketch below, maybe.
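For example, I imagine host_vars might fit here (an untested sketch; uplink_interface is a name I made up):
# host_vars/ACCESS01.yml
uplink_interface: po850

# host_vars/ACCESS02.yml
uplink_interface: po860

and then a single task could use it on every host:
- name: ASSIGN VLAN TO PORTS
  nxos_switchport:
    interface: "{{ uplink_interface }}"
    mode: trunk
    trunk_vlans: "2600"
    provider: "{{ provider }}"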
Any thoughts and advice on improving both the playbook and my knowledge are greatly appreciated. I guess I'm looking for the most "ansiblistic" way to accomplish this. That's not a word, huh?
For Question #1, you can do it like this:
- name: ASSIGN VLAN TO TRUNK PORTS
  nxos_switchport:
    interface: "{{ item.interface }}"
    mode: trunk
    trunk_vlans: "{{ item.vlan | default('2600') }}"
    provider: "{{ provider }}"
  with_items:
    - interface: po850
    - interface: po860
    - interface: po865
    - interface: po868
    - interface: po871
    - interface: po872
    - interface: po875
    - interface: po877
    - interface: po884
If you want to assign a different VLAN to one or more interfaces, you can override the default like this:
    - { interface: po850, vlan: 2700 }
Hope that helps.
I need to get JMX metrics from the Hazelcast product. I have created a Logstash process that connects to the JMX port. This process has to read a JSON file containing the hostname, port, cluster, environment, etc. of the Hazelcast JMX endpoint. I need to deploy on the Logstash machines one JSON file for each Hazelcast machine/port. In this case there are three Hazelcast machines and a total of six processes with different ports.
Example data:
Hazelcast Hostnames: hazelcast01, hazelcast02, hazelcast03
Hazelcast Ports: 6661, 6662, 6663, 6664, 6665
Logstash Hostnames: logstash01, logstash02, logstash03
Dictionary of Hazelcast information in Ansible:
logstash_hazelcast_jmx:
  - hazelcast_pre:
      name: hazelcast_pre
      port: 15554
      cluster: PRE
  - hazelcast_dev:
      name: hazelcast_dev
      port: 15555
      cluster: DEV
Example of task in Ansible:
- name: Deploy HAZELCAST JMX config
  template:
    src: "hazelcast_jmx.json.j2"
    dest: "{{ logstash_directory_jmx_hazelcast }}/hazelcast_jmx_{{ item }}_{{ item.value.cluster }}.json"
    owner: "{{ logstash_system_user }}"
    group: "{{ logstash_system_group }}"
    mode: 0640
  with_dict:
    - "{{ groups['HAZELCAST'] }}"
    - logstash_hazelcast_jmx
The final result should be as follows:
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast01_DEV.json
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast01_PRE.json
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast02_DEV.json
...
Here is an example of the json content:
{
  "host" : "{{ hostname of groups['HAZELCAST'] }}",
  "port" : {{ item.value.port }},
  "alias" : "{{ hostname of groups['HAZELCAST'] }}_{{ item.value.cluster }}",
  "queries" : [
    {
      "object_name" : "com.hazelcast:instance=_hz_{{ item.value.cluster }},type=XXX,name=YYY",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    }, {
      "object_name" : "com.hazelcast:instance=_hz_{{ item.value.cluster }},type=IMap,name=user",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    }
  ]
}
I think the problem is that the with_dict option does not allow combining a list of inventory hosts with a dictionary.
How can I generate these JSON files for each machine/port?
If you run your playbook against logstash hosts, you can use with_nested:
---
- hosts: logstash_hosts
  tasks:
    - name: Deploy HAZELCAST JMX config
      template:
        src: "hazelcast_jmx.json.j2"
        dest: "{{ logstash_directory_jmx_hazelcast }}/hazelcast_jmx_{{ helper_host }}_{{ helper_cluster }}.json"
        owner: "{{ logstash_system_user }}"
        group: "{{ logstash_system_group }}"
        mode: 0640
      with_nested:
        - "{{ groups['HAZELCAST'] }}"
        - "{{ logstash_hazelcast_jmx }}"
      vars:
        helper_host: "{{ item.0 }}"
        helper_cluster: "{{ item.1.cluster }}"
        helper_port: "{{ item.1.port }}"
I also used helper variables with more meaningful names. You should modify your template accordingly, using either the helper vars or item.0 and item.1, where item.0 is a host from the HAZELCAST group and item.1 is an item from the logstash_hazelcast_jmx list.
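For example, the relevant lines of hazelcast_jmx.json.j2 might become (a sketch using the helper vars above):
{
  "host" : "{{ helper_host }}",
  "port" : {{ helper_port }},
  "alias" : "{{ helper_host }}_{{ helper_cluster }}",
  ...
}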
I am creating an Ansible playbook using version 2.0.0-0.3.beta1.
I'd like to get the subnet id after I create a subnet. I'm referring to the official Ansible docs: http://docs.ansible.com/ansible/ec2_vpc_route_table_module.html
- name: Create VPC Public Subnet
  ec2_vpc_subnet:
    state: present
    resource_tags: '{"Name":"{{ prefix }}_subnet_public_0"}'
    vpc_id: "{{ vpc.vpc_id }}"
    az: "{{ az0 }}"
    cidr: 172.16.0.0/24
  register: public

- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      - "{{ public.subnet_id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
After running the playbook I received the following error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! 'dict object' has no attribute 'subnet_id'"}
Try using public.subnet.id instead of public.subnet_id.
It's useful to debug by running this task:
- debug: msg="{{ public }}"
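With that change, the subnets section of the route table task becomes (a sketch based on the task above):
    subnets:
      - "{{ public.subnet.id }}"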