I am not sure what I'm doing wrong here. This is the uri task I have:
- name: Waiting for Entity Submission to complete
  uri:
    url: https://{{ endpoint_ip }}:9440/{{ endpoint_api_task }}/{{ intent_status.json.metadata.uuid }}
    url_username: "{{ endpoint_api_username }}"
    url_password: "{{ endpoint_api_password }}"
    validate_certs: false
    force_basic_auth: true
    return_content: true
    follow_redirects: all
    method: 'GET'
  register: v3_progress
  debugger: always
I'm not able to read the return value v3_progress.
In the debugger I see the key error below:
[PE] TASK: nat : Waiting for Entity Submission to complete (debug)> p task_vars['v3_progress']
***KeyError:KeyError('v3_progress',)
If I debug the task args - all arguments seem accurate enough and the task itself gets a 200:
[PE] TASK: nat : Waiting for Entity Submission to complete (debug)> p task.args
{'_ansible_check_mode': False,
 '_ansible_debug': False,
 '_ansible_diff': False,
 '_ansible_keep_remote_files': False,
 '_ansible_module_name': u'uri',
 '_ansible_no_log': False,
 '_ansible_remote_tmp': u'~/.ansible/tmp',
 '_ansible_selinux_special_fs': ['fuse',
                                 'nfs',
                                 'vboxsf',
                                 'ramfs',
                                 '9p',
                                 'vfat'],
 '_ansible_shell_executable': u'/bin/sh',
 '_ansible_socket': None,
 '_ansible_string_conversion_action': u'warn',
 '_ansible_syslog_facility': u'LOG_USER',
 '_ansible_tmpdir': u'/home/nutanix/.ansible/tmp/ansible-tmp-1586646620.05-143519614604489/',
 '_ansible_verbosity': 0,
 '_ansible_version': '2.9.6',
 u'follow_redirects': u'all',
 u'force_basic_auth': True,
 u'method': u'GET',
 u'return_content': True,
 u'url': u'https://xxxx/v3/images/ce405ddf-91f1-4399-b5b5-c6f12f1da79c',
 u'url_password': u'xxxx',
 u'url_username': u'admin',
 u'validate_certs': False}
The register variable will not be available in the debugger. The value of the registered variable v3_progress is populated only when the task returns, and it can be used only in subsequent tasks.
To view the result in the debugger, use p result._result.
You will then be able to read the value of v3_progress in a subsequent task, for example:
- name: print uri result
  debug: msg="{{ v3_progress }}"
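As a side note, since the registered result is only usable after the task finishes, the usual way to make a task like this actually wait for completion is to poll it with until/retries. A rough sketch, assuming the task API returns a JSON body with a top-level status field whose terminal value is SUCCEEDED (adjust the field name and value to the real payload):
- name: Waiting for Entity Submission to complete
  uri:
    url: https://{{ endpoint_ip }}:9440/{{ endpoint_api_task }}/{{ intent_status.json.metadata.uuid }}
    url_username: "{{ endpoint_api_username }}"
    url_password: "{{ endpoint_api_password }}"
    validate_certs: false
    force_basic_auth: true
    return_content: true
    method: 'GET'
  register: v3_progress
  # assumption: the JSON body exposes a top-level "status" field
  until: v3_progress.json.status == "SUCCEEDED"
  retries: 30
  delay: 10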
I started using role argument validation, which was introduced in Ansible 2.11. For example, a simple variable of type string looks like this in meta/argument_specs.yml:
argument_specs:
  main:
    short_description: "Checking firewall global policies"
    options:
      openwrt_firewall_default_forward:
        type: "str"
        choices:
          - "ACCEPT"
          - "REJECT"
          - "DROP"
But I have more complex variables than that, for example dicts. This is what the variables can look like:
openwrt_firewall_zoneshost:
  MGMT:
    forward: "REJECT"
    input: "ACCEPT"
    output: "ACCEPT"
    #log: 1
    interfaces:
      - "mgmt"
  SECURE:
    forward: "ACCEPT"
    input: "ACCEPT"
    output: "ACCEPT"
    #log: 1
    interfaces:
      - "secure"
or like this:
openwrt_firewall_ruleshost:
  "syslog logrx1":
    src: "*"
    dest: "RXFORELLE"
    dest_ip:
      - "{{ hostvars['alsomyhost.mydomain.de']['primary_ip6'] }}"
    dest_port: "514"
    target: "ACCEPT"
  "MGMT myhost":
    src: "MGMT"
    dest: "RXFORELLE"
    dest_ip:
      - "{{ hostvars['myhost.mydomain.de']['primary_ip6'] }}"
    proto: "tcp"
    dest_port: "22"
    target: "ACCEPT"
I would like to be able to validate those: some of the attributes are mandatory, like src or dest, while others are optional, but if they are given I would like to make sure they are of the correct type.
Honestly, I don't understand the documentation with regard to this particular problem of dict variables. Could someone please outline what the validation structure would look like for these types of variables?
Reference:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#role-argument-validation
You can repeat options: in the definition of an option; these are sometimes called "suboptions", and within them you can use all the same fields to define what that dict should contain.
argument_specs:
  main:
    short_description: "Checking firewall global policies"
    options:
      openwrt_firewall_ruleshost:
        type: "dict"
        required: true
        options:
          src:
            required: true
            type: str
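To sketch how that might look for the rule variables from the question, with required and optional fields and a typed list (the required/optional split and the types here are assumptions, and note that the spec validates the key names you declare, not arbitrary top-level keys):
argument_specs:
  main:
    short_description: "Checking firewall rules"
    options:
      openwrt_firewall_ruleshost:
        type: "dict"
        required: true
        options:
          src:
            required: true
            type: "str"
          dest:
            required: true
            type: "str"
          dest_ip:
            required: false
            type: "list"
            elements: "str"
          dest_port:
            required: false
            type: "str"
          proto:
            required: false
            type: "str"
          target:
            required: false
            type: "str"
            choices:
              - "ACCEPT"
              - "REJECT"
              - "DROP"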
I'm using the Ansible os_project_facts module to gather the admin project ID of OpenStack.
This is the ansible_facts log:
ansible_facts:
  openstack_projects:
    - description: Bootstrap project for initializing the cloud.
      domain_id: default
      enabled: true
      id: <PROJECT_ID>
      is_domain: false
      is_enabled: true
      location:
        cloud: envvars
        project:
          domain_id: default
          domain_name: null
          id: default
          name: null
        region_name: null
        zone: null
      name: admin
      options: {}
      parent_id: default
      properties:
        options: {}
        tags: []
      tags: []
Apparently, openstack_projects is not a dictionary, so I can't get openstack_projects.id. How can I retrieve PROJECT_ID and use it in other tasks?
Since the openstack_projects fact contains a single list element holding a dictionary, we can use array indexing to get the id, i.e. openstack_projects[0]['id'].
You can use it directly, or use something like set_fact:
- name: get the project id
  set_fact:
    project_id: "{{ openstack_projects[0]['id'] }}"
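If the list could ever contain more than one project, a slightly more defensive variant is to select the project by name first (a sketch; 'admin' is the project name from the output above):
- name: get the admin project id by name
  set_fact:
    project_id: "{{ openstack_projects | selectattr('name', 'equalto', 'admin') | map(attribute='id') | first }}"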
I'm trying to develop a playbook where I have the following variable.
disk_vars:
  - { Unit: C, Size: 50 }
  - { Unit: D, Size: 50 }
With the variables defined in the playbook there is no problem, but when I try to use a textarea survey on Ansible Tower I cannot manage to parse them as a list of dictionaries.
I tried adding the following two lines to the survey, which are already in YAML format.
- { Unit: C, Size: 50 }
- { Unit: D, Size: 50 }
And in my vars section I use test_var: "{{ test_var1.split('\n') }}", which converts the output into a two-line string. Without the split it is just a single-line string.
I could make my playbook work with a simple dictionary like
dict1: {{ Unit: C, Size: 50 }}
but I'm having issues parsing it as well.
EDIT
Changing it to the following as suggested by mdaniels works.
- set_fact:
    test_var: "{{ test_var1 | from_yaml }}"

- name: test
  debug: msg="hostname is {{ item.Unit }} and {{ item.Size }}"
  with_items:
    - "{{ test_var }}"
I'm trying to figure out a way to clean up the data input, as asking users to respect the format is not a very good idea.
I tried changing the input data to the following, but I could not figure out how to turn that into a list of dictionaries.
disk_vars:
Unit: C, Size: 50
Unit: D, Size: 50
I tried with the following piece of code
- set_fact:
    db_list: >-
      {{ test_var1.split("\n") | select |
         map("regex_replace", "^", "- {") |
         map("regex_replace", "$", "}") |
         join("\n") }}
But it is putting it all on a single line:
"db_list": "- {dbid: 1, dbname: abc\ndbid: 2, dbname: xyz} "
I have tried to play with it but could not manage to make it work.
I believe you were very close; instead of "{{ test_var1.split('\n') }}", you can just feed it to the from_yaml filter:
- set_fact:
    test_var1: '{{ test_var1 | from_yaml }}'
  # this is just to simulate the **str** that you will receive from the textarea
  vars:
    test_var1: "- { Unit: C, Size: 50 }\n- { Unit: D, Size: 50 }\n"

- debug:
    msg: and now test_var1[0].Unit is {{ test_var1[0].Unit }}
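Since you mention that users cannot be trusted to respect the format, one option is to fail early with a clear message after parsing; a minimal sketch, assuming test_var1 has already been run through from_yaml:
- name: sanity-check the parsed survey input
  assert:
    that:
      - item.Unit is defined
      - item.Size is defined
      - item.Size | int > 0
    fail_msg: "each survey line must look like: - { Unit: C, Size: 50 }"
  loop: "{{ test_var1 }}"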
I faced a similar dilemma, i.e. I was bound to the survey formats available and was forced to use the solution mdaniels suggested above: send the data as text and parse it from YAML later. The problem, however, was that controlling the format of the input (i.e. a YAML string inside the text) would probably cause a lot of headaches/errors, just as you describe.
Maybe you really need to use the survey, but in my case I was more interested in calling the job template using the Tower REST API. For some reason I thought I then had to have a survey with all parameters defined, but it turned out I was wrong: with a survey in place I was not able to provide dictionaries as input data (in the extra_vars). However, when I removed the survey and also (not sure if it is required or not) enabled "Extra Variables -> prompt on launch", things started to work. Now I can provide lists/dictionaries as input to my templates when calling them with REST API POST calls; see the example below:
{
  "extra_vars": {
    "p_db_name": "MYSUPERDB",
    "p_appl_id": "MYD32",
    "p_admin_user": "myadmin",
    "p_admin_pass": "mysuperpwd",
    "p_db_state": "present",
    "p_tablespaces": [
      {
        "name": "tomahawk",
        "size": "10M",
        "bigfile": true,
        "autoextend": true,
        "next": "1M",
        "maxsize": "20M",
        "content": "permanent",
        "state": "present"
      }
    ],
    "p_users": [
      {
        "schema": "myschema",
        "password": "Mypass123456#",
        "default_tablespace": "tomahawk",
        "state": "present",
        "grants": "'create session', 'create any table'"
      }
    ]
  }
}
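For completeness, the POST itself can even be done from another playbook with the uri module; a sketch, where the Tower URL, job template id 42 and the credential variables are placeholders:
- name: launch the job template via the Tower REST API
  uri:
    url: "https://tower.example.com/api/v2/job_templates/42/launch/"
    method: POST
    url_username: "{{ tower_user }}"
    url_password: "{{ tower_pass }}"
    force_basic_auth: true
    validate_certs: false
    body_format: json
    status_code: 201
    body:
      extra_vars:
        p_db_name: "MYSUPERDB"
        p_tablespaces:
          - name: "tomahawk"
            size: "10M"
            state: "present"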
I am using a list of dicts to declare websites to configure on a web server.
There are some calculated properties I don't want to redeclare each time I need them, so before using the list I run a loop that adds all the calculated/missing properties, to end up with a proper list of website dicts.
Here is what I am doing for now
- name: Set server vhosts
  set_fact:
    websites: "{{ websites|default([]) + [item | combine({'vhost': '200-' + item.name, 'path': path_vhosts + '/' + item.name, 'domain': app_hosts[item.name]})] }}"
  with_items: "{{ vhosts }}"
But this is very limited, and it will become unreadable if there are too many properties to set.
How could I improve it to build the list properly?
Ideally there would be no separate vhosts variable at all; we would just use websites and replace it.
Q: "Will be unreadable if there is too much property to set. How could I improve it to build it properly?"
A: It's built properly. The formatting might help. See below
- name: Set server vhosts
  set_fact:
    websites: "{{ websites|default([]) + [item|
                  combine({'vhost': '200-' + item.name,
                           'path': path_vhosts + '/' + item.name,
                           'domain': app_hosts[item.name]
                          })] }}"
  loop: "{{ vhosts }}"
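If the number of computed properties keeps growing, an alternative some people find more readable is to build the whole list in one templated expression with an inline Jinja loop instead of a set_fact loop; a sketch using the same variables as above (whether it is actually clearer is a matter of taste):
- name: Set server vhosts
  set_fact:
    websites: >-
      {%- set out = [] -%}
      {%- for site in vhosts -%}
      {%- set _ = out.append(site | combine({
            'vhost': '200-' ~ site.name,
            'path': path_vhosts ~ '/' ~ site.name,
            'domain': app_hosts[site.name]})) -%}
      {%- endfor -%}
      {{ out }}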
I'm currently using the route53_facts module on a project. I have 250 record sets in one hosted zone and I'm having difficulty listing all of the record sets in that zone. The Route 53 API returns pages of at most 100 records at a time; in order to retrieve the next page, you must pass the NextRecordName response value to the route53_facts module's start_record_name: field (pretty straightforward).
The issue I'm having specifically is getting Ansible to do this. Presumably one would do this using a loop, e.g. in pseudocode:
start
  get 100 records
  do until response does not contain NextRecordName:
    get 100 records (start_record_name=NextRecordName)
end
In Ansible, I have written the below task to do this:
- block:
    - name: List record sets in a given hosted zone
      route53_facts:
        query: record_sets
        hosted_zone_id: "/hostedzone/ZZZ1111112222"
        max_items: 100
        start_record_name: "{{ record_sets.NextRecordName | default(omit) }}"
      register: record_sets
      until: record_sets.NextRecordName is not defined
  when: "'{{ hosted_zone['Name'] }}' == 'test.example.com.'"
...however, this does not work as expected. Instead of continuously paging through responses until no more records are left, it repeatedly returns the first 100 records ("the first page").
As I can see from the Ansible debug output, start_record_name: is repeatedly null:
"attempts": 2,
"changed": false,
"invocation": {
"module_args": {
"aws_access_key": null,
"aws_secret_key": null,
"change_id": null,
"delegation_set_id": null,
"dns_name": null,
"ec2_url": null,
"health_check_id": null,
"health_check_method": "list",
"hosted_zone_id": "/hostedzone/ZZZ1111112222",
"hosted_zone_method": "list",
"max_items": "100",
"next_marker": null,
"profile": null,
"query": "record_sets",
"region": null,
"resource_id": null,
"security_token": null,
"start_record_name": null,
"type": null,
"validate_certs": true
}
},
...my guess is that the | default(omit) filter is always being executed. In other words, record_sets.NextRecordName is never initialized at this point in the task.
I'm hoping somebody can assist me in getting Ansible to return all records from a zone in Route 53. I think I've gotten tangled up in Ansible's looping behavior. Thanks!
Caveat this with "as best I can tell:"
To answer your question, it actually seems that until: and register: do not interact in the same way that when: and register: do. The best explanation I have is that until: behaves like a database transaction: it rolls back the register: assignment if the conditional is false, meaning that when the body of the until: task is tried again, it uses the same parameters as the first time. The only thing which keeps an until: block from being an infinite loop is the retries: value.
So, in your specific case, I think this will do the job:
- name: initial record_set
  route53_facts:
    # bootstrap so the upcoming "when:" will evaluate correctly
  register: record_facts

- set_fact:
    # capture the initial answer
    records0: '{{ record_facts.ResourceRecordSets }}'

- name: rest of them
  route53_facts:
    start_record_name: '{{ record_facts.NextRecordName }}'
  register: record_facts
  when: record_facts.NextRecordName | default("")
  with_sequence: count=10

- set_fact:
    all_records: >-
      {{ records0 + (record_facts.results |
         selectattr("ResourceRecordSets", "defined") |
         map(attribute="ResourceRecordSets") | list | flatten(levels=1)) }}
The with_sequence: is a hack because loop: (for which with_* is syntactic sugar) needs a list of items to iterate over, but given that the responses that come back without NextRecordName will cause the when: to fail, skipping them, the (in your case) 3rd through 10th iterations resolve almost immediately.
Then you just need to pull out the actual response data from the now list of route53_facts: replies, and glue them to the initial one to get the complete list.
Having said all of that, I am now convinced that the behavior of route53_facts: (and any other AWS module that pushes the burden of that iteration onto the playbook) is a bug. The module caller already has max_items: available to them, but it's an implementation detail that any given value can't be larger than some arbitrary pagination cut-off.
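Another workaround sometimes used for this kind of pagination, sketched here and not tested against this module, is a task file that includes itself while the API reports another page (the file name is hypothetical):
# paginate_route53.yml
- name: fetch one page of record sets
  route53_facts:
    query: record_sets
    hosted_zone_id: "/hostedzone/ZZZ1111112222"
    max_items: 100
    start_record_name: "{{ page.NextRecordName | default(omit) }}"
  register: page

- name: accumulate this page
  set_fact:
    all_records: "{{ all_records | default([]) + page.ResourceRecordSets }}"

- name: recurse while more pages remain
  include_tasks: paginate_route53.yml
  when: page.NextRecordName is defined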