I'm trying to get Ansible to create a launch configuration (using ec2_lc) and an auto scaling group (using ec2_asg). My playbook creates the launch configuration just fine, but the ASG task fails with this error:
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No launch config found with name {'name': 'dockerLC', 'instance_type': 't2.micro', 'image_id': 'ami-e97c548c', 'changed': False, 'failed': False, 'arn': None, 'created_time': 'None', 'security_groups': ['sg-e0c0de88'], 'result': {'image_id': 'ami-e97c548c', 'key_name': 'ansible-keys', 'launch_configuration_name': 'dockerLC', 'ebs_optimized': False, 'instance_type': 't2.micro', 'created_time': 'None', 'associate_public_ip_address': True, 'instance_monitoring': False, 'placement_tenancy': 'default', 'security_groups': ['sg-e0c0de88'], 'block_device_mappings': []}}"}
I'm unsure why this error is being thrown because I certainly have a dockerLC launch configuration in the AWS EC2 web interface. My playbook is below.
tasks:
  - name: Create Launch Configuration
    local_action:
      module: ec2_lc
      name: dockerLC
      region: '{{ aws_region }}'
      image_id: '{{ ami_id }}'
      key_name: '{{ key_name }}'
      security_groups: '{{ security_group_id }}'
      # security_groups: '{{ security_group }}'
      instance_type: '{{ instance_type }}'
      assign_public_ip: True
      vpc_id: '{{ vpc }}'
    register: lc
  # - pause:
  #     seconds: 10
  - name: Create Auto Scaling Group
    local_action:
      module: ec2_asg
      name: dockerASG
      region: '{{ aws_region }}'
      launch_config_name: '{{ lc }}'
      min_size: 3
      max_size: 5
      wait_for_instances: True
      availability_zones: ['us-east-2a', 'us-east-2b', 'us-east-2c']
      vpc_zone_identifier: ' {{ vpc_zones }} '
      tags:
        - function: docker
          billingGroup: Work
When you register a variable for a task, the variable is usually a hash containing more information than just a string. In this case, the registered hash contains:
{
  'name': 'dockerLC',
  'instance_type': 't2.micro',
  'image_id': 'ami-e97c548c',
  'changed': False,
  'failed': False,
  'arn': None,
  'created_time': 'None',
  'security_groups': ['sg-e0c0de88'],
  'result': {
    'image_id': 'ami-e97c548c',
    'key_name': 'ansible-keys',
    'launch_configuration_name': 'dockerLC',
    'ebs_optimized': False,
    'instance_type': 't2.micro',
    'created_time': 'None',
    'associate_public_ip_address': True,
    'instance_monitoring': False,
    'placement_tenancy': 'default',
    'security_groups': ['sg-e0c0de88'],
    'block_device_mappings': []
  }
}
But launch_config_name wants just the name as a string. In this case, you want to reference launch_config_name: '{{ lc.name }}', which is the name of the launch configuration. You could likely also use the variable lc.result.launch_configuration_name (as an example).
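For illustration, here is a minimal sketch of the corrected ASG task; everything except the launch_config_name line is assumed to stay as in the playbook above:

- name: Create Auto Scaling Group
  local_action:
    module: ec2_asg
    name: dockerASG
    region: '{{ aws_region }}'
    launch_config_name: '{{ lc.name }}'  # the plain string 'dockerLC', not the whole registered hash
    min_size: 3
    max_size: 5
    wait_for_instances: True
    availability_zones: ['us-east-2a', 'us-east-2b', 'us-east-2c']
    vpc_zone_identifier: '{{ vpc_zones }}'
    tags:
      - function: docker
        billingGroup: Work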
Related
I'm trying to collect VMware datastore info with the community.vmware.vmware_datastore_info module using the task below, and then debug the result:
- name: Gather info from datacenter about specific datastore
  community.vmware.vmware_datastore_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ username }}'
    password: '{{ password }}'
    datacenter_name: '{{ datacenter_name }}'
    name: ENV04
    validate_certs: False
  delegate_to: localhost
  register: info

- debug:
    msg: "{{ info }}"
The debug task is returning the below output:
TASK [Gather info from datacenter about specific datastore] ************************************************************************************************************
ok: [172.17.92.2 -> localhost]

TASK [debug] ***********************************************************************************************************************************************************
ok: [172.17.92.2] => {
    "msg": {
        "changed": false,
        "datastores": [],
        "failed": false
    }
}
This does not match the RETURN VALUES described in the ansible-doc output for this module:
RETURN VALUES:
- datastores
    metadata about the available datastores
    returned: always
    sample:
      - accessible: true
        capacity: 5497558138880
        datastore_cluster: datacluster0
        freeSpace: 4279000641536
        maintenanceMode: normal
        multipleHostAccess: true
        name: datastore3
        nfs_path: /vol/datastore3
        nfs_server: nfs_server1
        provisioned: 1708109410304
        type: NFS
        uncommitted: 489551912960
        url: ds:///vmfs/volumes/420b3e73-67070776/
Can anyone suggest what the issue might be? I was getting the expected output before, and now it is not coming through.
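One way to narrow this down (a diagnostic sketch, not from the module docs; it reuses the connection variables from the task above) is to omit the name filter and list every datastore the credentials can see, which tells you whether ENV04 is still visible at all:

- name: List all datastores in the datacenter (diagnostic)
  community.vmware.vmware_datastore_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ username }}'
    password: '{{ password }}'
    datacenter_name: '{{ datacenter_name }}'
    # name: is omitted on purpose so all datastores are returned
    validate_certs: False
  delegate_to: localhost
  register: all_ds

- debug:
    msg: "{{ all_ds.datastores | map(attribute='name') | list }}"

If this list is also empty, the problem is likely the datacenter name, permissions, or connectivity rather than the ENV04 filter itself.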
I have the following variables in my var file:
repo_type:
  hosted:
    data:
      - name: hosted_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: hosted_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
  proxy:
    data:
      - name: proxy_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: proxy_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
  group:
    data:
      - name: group_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: group_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
I want to configure a task that loops over the repo types (hosted, proxy and group) and sends each entry of the data dict as the request body.
Here is the task:
- name: Create pypi hosted Repos
  uri:
    url: "{{ nexus_api_scheme }}://{{ nexus_api_hostname }}:{{ nexus_api_port }}\
      {{ nexus_api_context_path }}{{ nexus_rest_api_endpoint }}/repositories/pypi/{{ item.key }}"
    user: "{{ nexus_api_user }}"
    password: "{{ nexus_default_admin_password }}"
    headers:
      accept: "application/json"
      Content-Type: "application/json"
    body_format: json
    method: POST
    force_basic_auth: yes
    validate_certs: "{{ nexus_api_validate_certs }}"
    body: "{{ item }}"
    status_code: 201
  no_log: no
  with_dict: "{{ repo_type }}"
I have tried with_items, with_dict and with_nested, but nothing helped; the error is always:
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'data'
Any help would be appreciated!
If your goal is to loop over the contents of the data keys as a flat list, you could do it like this:
- debug:
    msg: "repo {{ item.name }} write_policy {{ item.storage.write_policy }}"
  loop_control:
    label: "{{ item.name }}"
  loop: "{{ repo_type | json_query('*.data[]') }}"
That uses a JMESPath expression to get the data key from each top-level dictionary and then flattens the resulting nested list. In other words, it transforms your original structure into:
- name: hosted_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: hosted_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: proxy_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: proxy_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: group_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: group_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
When run against your example data, this produces the following output:
TASK [debug] *********************************************************************************************************
ok: [localhost] => (item=hosted_repo1) => {
    "msg": "repo hosted_repo1 write_policy allow_once"
}
ok: [localhost] => (item=hosted_repo2) => {
    "msg": "repo hosted_repo2 write_policy allow_once"
}
ok: [localhost] => (item=proxy_repo1) => {
    "msg": "repo proxy_repo1 write_policy allow_once"
}
ok: [localhost] => (item=proxy_repo2) => {
    "msg": "repo proxy_repo2 write_policy allow_once"
}
ok: [localhost] => (item=group_repo1) => {
    "msg": "repo group_repo1 write_policy allow_once"
}
ok: [localhost] => (item=group_repo2) => {
    "msg": "repo group_repo2 write_policy allow_once"
}
If you're trying to do something else, please update your question so
that it clearly shows the values you expect for each iteration of your
loop.
As @larsk pointed out, you didn't really explain how you are trying to loop over your data and what your API call is actually expecting.
But you are in luck this time, since I have messed around with Nexus quite a bit (and I think I actually recognize those variable names and the overall task layout).
The Nexus Repository POST /v1/repositories/pypi/[hosted|proxy|group] API endpoints expect one call for each of your repos in data. To implement your requirement, you need to loop over the keys of repo_type to select the appropriate endpoint, and then loop again over each element in data to send each repo definition to create.
This is possible by combining the dict2items and subelements filters in your loop, as in the playbook below (not directly tested).
The basics of the transformation are as follows.

First, transform your dict into a key/value list using dict2items, e.g. (shortened example):

- key: hosted
  value:
    data:
      - <repo definition 1>
      - <repo definition 2>
[...]

Then use the subelements filter to combine each top element with each element in value.data, e.g.:

-                       # <- entry for the first repo, i.e. `item` in your loop
  - key: hosted         # <- start of the corresponding top element, i.e. `item.0` in your loop
    value:
      data:
        - <repo definition 1>
        - <repo definition 2>
  - <repo definition 1> # <- the sub-element, i.e. `item.1` in your loop
-                       # <- entry for the second repo
  - key: hosted
    value:
      data:
        - <repo definition 1>
        - <repo definition 2>
  - <repo definition 2>
[...]
Following one of your comments, and from my experience, I added an explicit JSON serialization of the repo definition to my example using the to_json filter.
Putting it all together this gives:
- name: Create pypi hosted Repos
  uri:
    url: "{{ nexus_api_scheme }}://{{ nexus_api_hostname }}:{{ nexus_api_port }}\
      {{ nexus_api_context_path }}{{ nexus_rest_api_endpoint }}/repositories/pypi/{{ item.0.key }}"
    user: "{{ nexus_api_user }}"
    password: "{{ nexus_default_admin_password }}"
    headers:
      accept: "application/json"
      Content-Type: "application/json"
    body_format: json
    method: POST
    force_basic_auth: yes
    validate_certs: "{{ nexus_api_validate_certs }}"
    body: "{{ item.1 | to_json }}"
    status_code: 201
  no_log: no
  loop: "{{ repo_type | dict2items | subelements('value.data') }}"
I need to get JMX metrics from the Hazelcast product. I have created a Logstash process that connects to the JMX port. This process has to read a JSON file containing the hostname, port, cluster, environment, etc. of the Hazelcast JMX endpoint. I need to deploy a JSON file on the Logstash machines for each Hazelcast machine/port pair. In this case there are three Hazelcast machines and a total of six processes with different ports.
Example data:
Hazelcast Hostnames: hazelcast01, hazelcast02, hazelcast03
Hazelcast Ports: 6661, 6662, 6663, 6664, 6665
Logstash Hostnames: logstash01, logstash02, logstash03
Dictionary of Hazelcast information in Ansible:
logstash_hazelcast_jmx:
  - hazelcast_pre:
    name: hazelcast_pre
    port: 15554
    cluster: PRE
  - hazelcast_dev:
    name: hazelcast_dev
    port: 15555
    cluster: DEV
Example of task in Ansible:
- name: Deploy HAZELCAST JMX config
  template:
    src: "hazelcast_jmx.json.j2"
    dest: "{{ logstash_directory_jmx_hazelcast }}/hazelcast_jmx_{{ item }}_{{ item.value.cluster }}.json"
    owner: "{{ logstash_system_user }}"
    group: "{{ logstash_system_group }}"
    mode: 0640
  with_dict:
    - "{{ groups['HAZELCAST'] }}"
    - logstash_hazelcast_jmx
The final result should be as follows:
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast01_DEV.json
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast01_PRE.json
/opt/logstash/jmx/hazelcast/hazelcast_jmx_hazelcast02_DEV.json
...
Here is an example of the json content:
{
  "host" : "{{ hostname of groups['HAZELCAST'] }}",
  "port" : {{ item.value.port }},
  "alias" : "{{ hostname of groups['HAZELCAST'] }}_{{ item.value.cluster }}",
  "queries" : [
    {
      "object_name" : "com.hazelcast:instance=_hz_{{ item.value.cluster }},type=XXX,name=YYY",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    },
    {
      "object_name" : "com.hazelcast:instance=_hz_{{ item.value.cluster }},type=IMap,name=user",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    }
  ]
}
I think the problem is that with_dict does not allow combining a list of inventory hosts with a dictionary. How can I generate these JSON files for each machine/port?
If you run your playbook against logstash hosts, you can use with_nested:
---
- hosts: logstash_hosts
  tasks:
    - name: Deploy HAZELCAST JMX config
      template:
        src: "hazelcast_jmx.json.j2"
        dest: "{{ logstash_directory_jmx_hazelcast }}/hazelcast_jmx_{{ helper_host }}_{{ helper_cluster }}.json"
        owner: "{{ logstash_system_user }}"
        group: "{{ logstash_system_group }}"
        mode: 0640
      with_nested:
        - "{{ groups['HAZELCAST'] }}"
        - "{{ logstash_hazelcast_jmx }}"
      vars:
        helper_host: "{{ item.0 }}"
        helper_cluster: "{{ item.1.cluster }}"
        helper_port: "{{ item.1.port }}"
I also used helper variables with more meaningful names. You should also modify your template to use either the helper vars or item.0 / item.1, where item.0 is a host from the HAZELCAST group and item.1 is an item from the logstash_hazelcast_jmx list.
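As a sketch of what that could look like (it just rewrites the JSON from the question using the helper variables, keeping only the IMap query as a placeholder), hazelcast_jmx.json.j2 might be:

{# sketch: helper_host, helper_port, helper_cluster come from the task vars above #}
{
  "host" : "{{ helper_host }}",
  "port" : {{ helper_port }},
  "alias" : "{{ helper_host }}_{{ helper_cluster }}",
  "queries" : [
    {
      "object_name" : "com.hazelcast:instance=_hz_{{ helper_cluster }},type=IMap,name=user",
      "attributes" : [ "size", "localHits" ],
      "object_alias" : "Hazelcast_map"
    }
  ]
}

Here helper_host is the inventory hostname from groups['HAZELCAST'], which matches the file names in the expected result.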
I don't properly understand how to use the template_parameters parameter correctly (http://docs.ansible.com/ansible/cloudformation_module.html). It looks like I could use this parameter to override some parameter in the template. A very simple configuration:
sandbox_cloudformation.yml
---
- name: Sandbox CloudFormation
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch Ansible CloudFormation Stack
      cloudformation:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        stack_name: "{{ aws_default_cloudformation_stack_name }}"
        state: "present"
        region: "{{ aws_default_region }}"
        disable_rollback: true
        template: "files/cloudformation.yml"
      args:
        template_parameters:
          GroupDescription: "Sandbox Security Group"
cloudformation.yml
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sandbox Stack
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: "DEMO"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '80'
          ToPort: '80'
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: '22'
          ToPort: '22'
          CidrIp: 0.0.0.0/0
But I get the following error:
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "msg": "Parameter values specified for a template which does not require them."}
I also get this error if I try to use, for example:

template_parameters:
  KeyName: "{{ aws_default_keypair }}"
  InstanceType: "{{ aws_default_instance_type }}"
Also, please advise on the best approach for using the cloudformation module with Ansible. Maybe the best way is to generate the CloudFormation template first and then use it in the next step? Like:
- name: Render Cloud Formation Template
  template: src=cloudformation.yml.j2 dest=rendered_templates/cloudformation.yml

- name: Launch Ansible CloudFormation Stack
  cloudformation:
    template: "rendered_templates/cloudformation.yml"
Thanks in advance!
You can use template_parameters to pass parameters to a CloudFormation template. The template has to declare those parameters in a top-level Parameters section and refer to them with Ref; the error you got means the playbook passed parameters to a template that declares none. In your case:
playbook:

...
args:
  template_parameters:
    GroupDescriptionParam: "Sandbox Security Group"
...
template:

...
Parameters:
  GroupDescriptionParam:
    Type: String
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription:
        Ref: GroupDescriptionParam
...
See
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group.html#w1ab2c19c12d282c19
for examples of AWS SecurityGroup CloudFormation templates.
Instead of using the template_parameters argument of the Ansible cloudformation module, I have found it handy to create a Jinja2 template and render it into a CloudFormation template with the Ansible template module, just as you suggested at the end of your question. With this approach you can omit args and template_parameters in the cloudformation module call.
For example:
- name: Generate CloudFormation template
  become: no
  run_once: yes
  local_action:
    module: template
    src: "mycloudformation.yml.j2"
    dest: "{{ cf_templ_dir }}/mycloudformation.yml"
  tags:
    - cloudformation
    - cf_template

- name: Deploy the CloudFormation stack
  become: no
  run_once: yes
  local_action:
    module: cloudformation
    stack_name: "{{ cf_stack_name }}"
    state: present
    region: "{{ ec2_default_region }}"
    template: "{{ cf_templ_dir }}/mycloudformation.yml"
  register: cf_result
  tags:
    - cloudformation

- name: Show CloudFormation output
  run_once: yes
  become: no
  debug: var=cf_result
  tags:
    - cloudformation
And mycloudformation.yml.j2:
---
AWSTemplateFormatVersion: '2010-09-09'
Description: Sandbox Stack
Resources:
  SandboxSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: {{ group_desc_param_defined_somewhere_in_ansible }}
...
I am creating an Ansible playbook using version 2.0.0-0.3.beta1. I'd like to get the subnet id after I create a subnet. I'm referring to the official Ansible docs: http://docs.ansible.com/ansible/ec2_vpc_route_table_module.html
- name: Create VPC Public Subnet
  ec2_vpc_subnet:
    state: present
    resource_tags: '{"Name":"{{ prefix }}_subnet_public_0"}'
    vpc_id: "{{ vpc.vpc_id }}"
    az: "{{ az0 }}"
    cidr: 172.16.0.0/24
  register: public

- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      - "{{ public.subnet_id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
After running the playbook I received the following error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! 'dict object' has no attribute 'subnet_id'"}
Try using public.subnet.id instead of public.subnet_id.
It's useful to debug by running this task:
- debug: msg="{{ public }}"
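For example, the route table task from the question would then become (a sketch; only the subnet reference changes):

- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      - "{{ public.subnet.id }}"  # the registered result nests the id under the 'subnet' key
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"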