I'm trying to collect VMware datastore info with the community.vmware.vmware_datastore_info module, using the task below, and then debug the result:
- name: Gather info from datacenter about specific datastore
  community.vmware.vmware_datastore_info:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ username }}'
    password: '{{ password }}'
    datacenter_name: '{{ datacenter_name }}'
    name: ENV04
    validate_certs: False
  delegate_to: localhost
  register: info

- debug:
    msg: "{{ info }}"
The debug task is returning the below output:
TASK [Gather info from datacenter about specific datastore] ************************************************************************************************************
ok: [172.17.92.2 -> localhost]
TASK [debug] ***********************************************************************************************************************************************************
ok: [172.17.92.2] => {
    "msg": {
        "changed": false,
        "datastores": [],
        "failed": false
    }
}
Rather than the one mentioned in the ansible-doc for this module:
RETURN VALUES:

- datastores
    metadata about the available datastores
    returned: always
    sample:
      - accessible: true
        capacity: 5497558138880
        datastore_cluster: datacluster0
        freeSpace: 4279000641536
        maintenanceMode: normal
        multipleHostAccess: true
        name: datastore3
        nfs_path: /vol/datastore3
        nfs_server: nfs_server1
        provisioned: 1708109410304
        type: NFS
        uncommitted: 489551912960
        url: ds:///vmfs/volumes/420b3e73-67070776/
Can anyone suggest what the issue is? I was getting the expected output before, and now it comes back empty.
I have an Ansible playbook where I want to purge all queues in RabbitMQ:
- name: Get queues
  uri:
    url: http://localhost:15672/api/queues
    method: GET
    return_content: yes
    status_code: 200,404
    body_format: json
  register: result

- name: Just the Names
  debug: msg="{{ result.json | json_query(jmesquery) }}"
  vars:
    jmesquery: "[*].name"
  register: queue_name
This returns the list of queues:
TASK [demo : Just the Names] ***
ok: [test] => {
    "msg": [
        "test1",
        "test2"
    ]
}
How can I loop over the queues to delete their contents? I tried:
- name: delete queues
  uri:
    url: "http://localhost:15672/api/queues/aaa/{{ queue_name }}/contents"
    method: DELETE
You could try:
- name: delete queues
  uri:
    url: "http://localhost:15672/api/queues/aaa/{{ item }}/contents"
    method: DELETE
  loop: "{{ result.json | json_query(jmesquery) }}"
  vars:
    jmesquery: "[*].name"
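To sanity-check what the `[*].name` projection does outside Ansible, here is a plain-Python equivalent (the sample data is a made-up, trimmed-down stand-in for `result.json`):

```python
# "[*].name" projects the "name" key of every element in the list;
# in plain Python it is equivalent to a simple list comprehension.
queues = [
    {"name": "test1", "messages": 3},
    {"name": "test2", "messages": 0},
]

names = [q["name"] for q in queues]
print(names)  # -> ['test1', 'test2']
```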
I have the following variables in my var file:
repo_type:
  hosted:
    data:
      - name: hosted_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: hosted_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
  proxy:
    data:
      - name: proxy_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: proxy_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
  group:
    data:
      - name: group_repo1
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
      - name: group_repo2
        online: true
        storage:
          blobstarage: default
          write_policy: allow_once
I want to configure a task that loops over the repo types (hosted, proxy and group) and posts each entry of the data list as the request body.
Here is the task:
- name: Create pypi hosted Repos
  uri:
    url: "{{ nexus_api_scheme }}://{{ nexus_api_hostname }}:{{ nexus_api_port }}\
      {{ nexus_api_context_path }}{{ nexus_rest_api_endpoint }}/repositories/pypi/{{ item.key }}"
    user: "{{ nexus_api_user }}"
    password: "{{ nexus_default_admin_password }}"
    headers:
      accept: "application/json"
      Content-Type: "application/json"
    body_format: json
    method: POST
    force_basic_auth: yes
    validate_certs: "{{ nexus_api_validate_certs }}"
    body: "{{ item }}"
    status_code: 201
  no_log: no
  with_dict: "{{ repo_type }}"
I have tried with_items, with_dict and with_nested but nothing helped.
The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'data'
Any help would be appreciated!
If your goal is to loop over the contents of the data keys as a flat list, you could do it like this:
- debug:
    msg: "repo {{ item.name }} write_policy {{ item.storage.write_policy }}"
  loop_control:
    label: "{{ item.name }}"
  loop: "{{ repo_type | json_query('*.data[]') }}"
That uses a JMESPath expression to get the data key from each top-level dictionary, and then flattens the resulting nested list. In other words, it transforms your original structure into:
- name: hosted_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: hosted_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: proxy_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: proxy_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: group_repo1
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
- name: group_repo2
  online: true
  storage:
    blobstarage: default
    write_policy: allow_once
When run against your example data, this produces the following output:
TASK [debug] *********************************************************************************************************
ok: [localhost] => (item=hosted_repo1) => {
    "msg": "repo hosted_repo1 write_policy allow_once"
}
ok: [localhost] => (item=hosted_repo2) => {
    "msg": "repo hosted_repo2 write_policy allow_once"
}
ok: [localhost] => (item=proxy_repo1) => {
    "msg": "repo proxy_repo1 write_policy allow_once"
}
ok: [localhost] => (item=proxy_repo2) => {
    "msg": "repo proxy_repo2 write_policy allow_once"
}
ok: [localhost] => (item=group_repo1) => {
    "msg": "repo group_repo1 write_policy allow_once"
}
ok: [localhost] => (item=group_repo2) => {
    "msg": "repo group_repo2 write_policy allow_once"
}
If you're trying to do something else, please update your question so
that it clearly shows the values you expect for each iteration of your
loop.
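For reference, here is a plain-Python sketch of what the `*.data[]` projection computes, using made-up sample data shaped like the question's `repo_type` (each repo trimmed to just its name):

```python
# "*.data[]" takes the "data" list from every top-level value and
# flattens the results into one list -- equivalent to a nested
# comprehension over the dict's values.
repo_type = {
    "hosted": {"data": [{"name": "hosted_repo1"}, {"name": "hosted_repo2"}]},
    "proxy":  {"data": [{"name": "proxy_repo1"}, {"name": "proxy_repo2"}]},
    "group":  {"data": [{"name": "group_repo1"}, {"name": "group_repo2"}]},
}

flat = [repo for rt in repo_type.values() for repo in rt["data"]]
print([r["name"] for r in flat])
# -> ['hosted_repo1', 'hosted_repo2', 'proxy_repo1', 'proxy_repo2',
#     'group_repo1', 'group_repo2']
```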
As noted by @larsk, you didn't really explain how you are trying to loop over your data and what your API call actually expects.
But you are in luck this time, since I have messed around with Nexus quite a bit (and I think I actually recognize those variable names and the overall task layout).
The Nexus Repository POST /v1/repositories/pypi/[hosted|proxy|group] API endpoints are expecting one call for each of your repos in data. To implement your requirement you need to loop over the keys in repo_type to select the appropriate endpoint and then loop again over each element in data to send the repo definition to create.
This is possible by combining the dict2items and subelements filters in your loop, as in the playbook below (not directly tested).
The basics of the transformation are as follow:
transform your dict to a key/value list using dict2items e.g. (shortened example)
- key: hosted
  value:
    data:
      - <repo definition 1>
      - <repo definition 2>
[...]
use the subelements filter to combine each top element with each element in value.data e.g.:
- # <- This is the entry for the first repo, i.e. `item` in your loop
  - key: hosted # <- this is the start of the corresponding top element, i.e. `item.0` in your loop
    value:
      data:
        - <repo definition 1>
        - <repo definition 2>
  - <repo definition 1> # <- this is the sub-element, i.e. `item.1` in your loop
- # <- this is the entry for the second repo
  - key: hosted
    value:
      data:
        - <repo definition 1>
        - <repo definition 2>
  - <repo definition 2>
[...]
Following one of your comments and from my experience, I added to my example an explicit json serialization of the repo definition using the to_json filter.
Putting it all together this gives:
- name: Create pypi hosted Repos
  uri:
    url: "{{ nexus_api_scheme }}://{{ nexus_api_hostname }}:{{ nexus_api_port }}\
      {{ nexus_api_context_path }}{{ nexus_rest_api_endpoint }}/repositories/pypi/{{ item.0.key }}"
    user: "{{ nexus_api_user }}"
    password: "{{ nexus_default_admin_password }}"
    headers:
      accept: "application/json"
      Content-Type: "application/json"
    body_format: json
    method: POST
    force_basic_auth: yes
    validate_certs: "{{ nexus_api_validate_certs }}"
    body: "{{ item.1 | to_json }}"
    status_code: 201
  no_log: no
  loop: "{{ repo_type | dict2items | subelements('value.data') }}"
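To make the shape of `item.0` and `item.1` concrete, here is a plain-Python sketch of the pairs that `dict2items | subelements('value.data')` produces (sample data trimmed to names only, for illustration):

```python
# dict2items turns the dict into [{'key': k, 'value': v}, ...];
# subelements('value.data') then pairs each of those items with every
# entry of its value.data list. In the loop, item.0 is the
# {'key', 'value'} dict (endpoint selector) and item.1 is one repo
# definition (request body).
repo_type = {
    "hosted": {"data": [{"name": "hosted_repo1"}, {"name": "hosted_repo2"}]},
    "proxy":  {"data": [{"name": "proxy_repo1"}]},
}

pairs = [
    ({"key": key, "value": value}, repo)
    for key, value in repo_type.items()
    for repo in value["data"]
]

for item0, item1 in pairs:
    print(item0["key"], item1["name"])
# -> hosted hosted_repo1
#    hosted hosted_repo2
#    proxy proxy_repo1
```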
I'm trying to get Ansible to create a launch configuration (using ec2_lc) and an auto scaling group (ec2_asg). My playbook creates the launch configuration just fine, but the ASG task fails with an error:
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "No launch config found with name {'name': 'dockerLC', 'instance_type': 't2.micro', 'image_id': 'ami-e97c548c', 'changed': False, 'failed': False, 'arn': None, 'created_time': 'None', 'security_groups': ['sg-e0c0de88'], 'result': {'image_id': 'ami-e97c548c', 'key_name': 'ansible-keys', 'launch_configuration_name': 'dockerLC', 'ebs_optimized': False, 'instance_type': 't2.micro', 'created_time': 'None', 'associate_public_ip_address': True, 'instance_monitoring': False, 'placement_tenancy': 'default', 'security_groups': ['sg-e0c0de88'], 'block_device_mappings': []}}"}
I'm unsure why this error is being thrown because I certainly have a dockerLC launch configuration in the AWS EC2 web interface. My playbook is below.
tasks:
  - name: Create Launch Configuration
    local_action:
      module: ec2_lc
      name: dockerLC
      region: '{{ aws_region }}'
      image_id: '{{ ami_id }}'
      key_name: '{{ key_name }}'
      security_groups: '{{ security_group_id }}'
      # security_groups: '{{ security_group }}'
      instance_type: '{{ instance_type }}'
      assign_public_ip: True
      vpc_id: '{{ vpc }}'
    register: lc

  # - pause:
  #     seconds: 10

  - name: Create Auto Scaling Group
    local_action:
      module: ec2_asg
      name: dockerASG
      region: '{{ aws_region }}'
      launch_config_name: '{{ lc }}'
      min_size: 3
      max_size: 5
      wait_for_instances: True
      availability_zones: ['us-east-2a', 'us-east-2b', 'us-east-2c']
      vpc_zone_identifier: ' {{ vpc_zones }} '
      tags:
        - function: docker
          billingGroup: Work
When you register a variable from a task, the variable is usually a hash containing more information than just a string. In this case, the hash contains:
{
    'name': 'dockerLC',
    'instance_type': 't2.micro',
    'image_id': 'ami-e97c548c',
    'changed': False,
    'failed': False,
    'arn': None,
    'created_time': 'None',
    'security_groups': ['sg-e0c0de88'],
    'result': {
        'image_id': 'ami-e97c548c',
        'key_name': 'ansible-keys',
        'launch_configuration_name': 'dockerLC',
        'ebs_optimized': False,
        'instance_type': 't2.micro',
        'created_time': 'None',
        'associate_public_ip_address': True,
        'instance_monitoring': False,
        'placement_tenancy': 'default',
        'security_groups': ['sg-e0c0de88'],
        'block_device_mappings': []
    }
}
But launch_config_name wants just the name as a string. In this case, you want to reference launch_config_name: '{{ lc.name }}', which is the name of the launch configuration. You could likely also use the variable lc.result.launch_configuration_name (as an example).
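Applied to the playbook above, the ASG task would look like this (a sketch reusing the question's variables, untested):

```yaml
- name: Create Auto Scaling Group
  local_action:
    module: ec2_asg
    name: dockerASG
    region: '{{ aws_region }}'
    # Pass just the launch configuration's name, not the whole registered hash
    launch_config_name: '{{ lc.name }}'
    min_size: 3
    max_size: 5
    wait_for_instances: True
    availability_zones: ['us-east-2a', 'us-east-2b', 'us-east-2c']
    vpc_zone_identifier: '{{ vpc_zones }}'
```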
Here is my Ansible code, where with_items is given a dictionary variable named "grafana_datasource_body". I always get the error message "with_items expects a list or a set". The code and related output are below.
Ansible code:
- name: Configure datasource creation http body
  set_fact:
    grafana_datasource_body:
      name: "{{ Ass_grafana_Datasource_Name }}"
      type: "{{ Ass_grafana_Datasource_Type }}"
      isDefault: "{{ Ass_grafana_Datasource_IsDefault }}"
      access: "{{ Ass_grafana_Datasource_Access }}"
      basicAuth: "{{ Ass_grafana_Datasource_BasicAuth }}"
      url: "{{ InfluxDB_Server }}:{{ Ass_grafana_Datasource_Http_Port }}"
      database: "{{ Ass_grafana_Datasource_Db }}"
      username: "{{ Ass_grafana_Datasource_DbUser }}"
      password: "{{ Ass_grafana_Datasource_DbPassword }}"
  when: Ass_grafana_Datasource_Name not in grafana_datasources_output.json | map(attribute='name') | list

- debug: msg="Datasource creation http body = {{ grafana_datasource_body }}"
  when: Ass_grafana_Datasource_Name not in grafana_datasources_output.json | map(attribute='name') | list

# Create non existing datasources
- name: datasource > Create datasource
  register: grafana_datasources_create_output
  failed_when: "'Datasource added' not in grafana_datasources_create_output | to_json"
  uri:
    url: "http://127.0.0.1:{{ Ass_grafana_Listen_Http_Port }}/api/datasources"
    method: POST
    HEADER_Content-Type: application/json
    body: '{{ item | to_json }}'
    force_basic_auth: true
    user: "{{ Ass_grafana_Api_User }}"
    password: "{{ Ass_grafana_Api_Password }}"
    status_code: 200
  with_items: "{{ grafana_datasource_body }}"
  when: item.name not in grafana_datasources_output.json | map(attribute='name') | list
Output
TASK: [./grafana/grafana_Role | Configure datasource creation http body] ******
<10.0.20.7> ESTABLISH CONNECTION FOR USER: centos
ok: [10.0.20.7] => {"ansible_facts": {"grafana_datasource_body": {"access": "proxy", "basicAuth": "false", "database": "webcom-int", "isDefault": "true", "name": "MonTonton", "password": "webcom", "type": "influxdb", "url": "http://localhost:8086", "username": "webcom"}}}
TASK: [./grafana/grafana_Role | debug msg="Datasource creation http body = {{grafana_datasource_body}}"] ***
<10.0.20.7> ESTABLISH CONNECTION FOR USER: centos
ok: [10.0.20.7] => {
"msg": "Datasource creation http body = {'username': u'webcom', 'name': u'MonTonton', 'database': u'webcom-int', 'url': u'http://localhost:8086', 'basicAuth': u'false', 'access': u'proxy', 'password': u'webcom', 'type': u'influxdb', 'isDefault': u'true'}"
}
TASK: [./grafana/grafana_Role | datasource > Create datasource] ***************
fatal: [10.0.20.7] => with_items expects a list or a set
FATAL: all hosts have already failed -- aborting
Is there any real reason to use a with_items loop here?
Replace
with_items: "{{grafana_datasource_body}}"
with
with_items: "{{ [grafana_datasource_body] }}"
this will work.
But you can simply use body: '{{ grafana_datasource_body | to_json }}' and drop with_items altogether.
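Put together, a sketch of the task without the loop (same variables as in the question, not tested):

```yaml
- name: datasource > Create datasource
  register: grafana_datasources_create_output
  failed_when: "'Datasource added' not in grafana_datasources_create_output | to_json"
  uri:
    url: "http://127.0.0.1:{{ Ass_grafana_Listen_Http_Port }}/api/datasources"
    method: POST
    HEADER_Content-Type: application/json
    # Send the dict directly as the JSON body -- no with_items needed
    body: '{{ grafana_datasource_body | to_json }}'
    force_basic_auth: true
    user: "{{ Ass_grafana_Api_User }}"
    password: "{{ Ass_grafana_Api_Password }}"
    status_code: 200
  when: Ass_grafana_Datasource_Name not in grafana_datasources_output.json | map(attribute='name') | list
```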
I am creating an Ansible playbook using version 2.0.0-0.3.beta1.
I'd like to get the subnet id after I create a subnet. I'm referring to the official Ansible docs: http://docs.ansible.com/ansible/ec2_vpc_route_table_module.html
- name: Create VPC Public Subnet
  ec2_vpc_subnet:
    state: present
    resource_tags: '{"Name":"{{ prefix }}_subnet_public_0"}'
    vpc_id: "{{ vpc.vpc_id }}"
    az: "{{ az0 }}"
    cidr: 172.16.0.0/24
  register: public

- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      - "{{ public.subnet_id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
After running the playbook, I received the following error:
fatal: [localhost]: FAILED! => {"failed": true, "msg": "ERROR! 'dict object' has no attribute 'subnet_id'"}
Try using public.subnet.id instead of public.subnet_id.
It's useful to debug by running this task:
- debug: msg="{{ public }}"
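Putting it together, the route table task would reference the registered result like this (a sketch based on the question's own variables):

```yaml
- name: Create Public Subnet Route Table
  ec2_vpc_route_table:
    vpc_id: "{{ vpc.vpc_id }}"
    region: "{{ region }}"
    tags:
      Name: Public
    subnets:
      # The registered result nests the id under the "subnet" key
      - "{{ public.subnet.id }}"
    routes:
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
```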