Select item from simple list with jmespath query - ansible

I have a simple list of strings. I want to select the item in the list based on the presence of a predefined string 'fixedaddress'. I created a query and expect it to return the first item/string from the example input list, but it is returning the following error:
fatal: [localhost]: FAILED! => {"msg": "JMESPathError in json_query filter plugin:\nIn function contains(), invalid type for value: fixedaddress/ZG5zLmZpeGVkX2FkZHJlc3MkMTAuMjM5LjEyLja, expected one of: ['array', 'string'], received: \"unknown\""}
Input (the strings are shortened because of sensitive info; the rest of each string contains another : and /, not sure whether that is related to the error):
[
  "fixedaddress/ZG5zLmZpeGVkX2FkZHJlc3MkMTAuMjM5LjEyLja",
  "record:a/ZG5zLmJpbmRfYSQuMjMubmwubW9kLG1hcmMwMDAxLW1"
]
Ansible code:
- name: "Delete IP Record: Task 2.2a: Filter Results."
  vars:
    jmesquery: "[?contains(@, `fixedaddress`)]"
  set_fact:
    ip_record_ref: "{{ ip_record_refs | json_query(jmesquery) }}"
  when: ip_record_refs | length > 1

I found the solution. Apparently the Ansible version we are using has a bug. See also:
https://github.com/ansible/ansible/issues/27299
So the workaround was to pass the data through to_json | from_json before json_query:
ip_record_ref: "{{ ip_record_refs | to_json | from_json | json_query(jmesquery) }}"
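Putting the workaround together with the original task, the whole thing would presumably look like this (a sketch based on the fix above, not re-tested here):
- name: "Delete IP Record: Task 2.2a: Filter Results."
  vars:
    jmesquery: "[?contains(@, `fixedaddress`)]"
  set_fact:
    ip_record_ref: "{{ ip_record_refs | to_json | from_json | json_query(jmesquery) }}"
  when: ip_record_refs | length > 1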


extract data with json_query from jinja2 variable in ansible

I have a variable in the inventory that contains a JSON formatted data.
I want to extract a specific part of the data with json_query.
The variable contains a list of domains with related IP addresses (the JSON is valid):
DOMAIN={"domain1.net": {"addr": ["10.10.10.10", "10.10.10.20"]}, "domain2.net": {"addr": ["172.16.10.1", "172.16.20.1", "172.16.30.1"]}}
With an ansible-playbook, using json_query, I want to extract only the domain2.net IP addresses.
I've used https://api.gopipeline.io/jmespath-tester to validate the JMESPath query.
With the filter "domain2.net".addr in the jmespath-tester, I got the following (expected) output:
[
"172.16.10.1",
"172.16.20.1",
"172.16.30.1"
]
When I apply the same json_query in this ansible-playbook, I get no output:
Task:
---
- name: Extract addr for domain2.net
  tags: test
  debug: msg="{{ DOMAIN | to_json | from_json | json_query("domain2.net".addr) }}"
Output:
ok: [domain-lab-1] => {
"msg": ""
}
I've also tested another query, filtering only domain2.net in the JMESPath online tester (https://api.gopipeline.io/jmespath-tester), and I get this expected output:
{
"addr": [
"172.16.10.1",
"172.16.20.1",
"172.16.30.1"
]
}
But when I try to do the same within an Ansible playbook, there is still no output:
Task:
---
- name: Extract addr for domain2.net
  tags: test
  debug: msg="{{ DOMAIN | to_json | from_json | json_query("domain2.net") }}"
Output:
ok: [domain-lab-1] => {
"msg": ""
}
If I try to print only the DOMAIN var, I can see the whole JSON output.
So the variable is read correctly.
I'm using ansible 2.9.14.
I've read about the to_json | from_json trick here:
Ansible : filter elements containing string with JMESPath
I'm not sure whether it is needed in my case; either way, adding or removing it makes no difference.
You don't need json_query. Simply reference the attribute. You can't use dot notation because domain2.net is not a valid variable name in Ansible. Put it in brackets instead. For example,
- name: Extract addr for domain2.net
  debug:
    msg: "{{ DOMAIN['domain2.net'].addr }}"
gives
msg:
- 172.16.10.1
- 172.16.20.1
- 172.16.30.1
Notes
See Referencing key:value dictionary variables.
Any string is a valid key in a YAML dictionary (mapping).
An Ansible variable name can only include letters, numbers, and underscores.
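For completeness, if json_query is still preferred (e.g. for deeper queries), a key containing a dot can also be quoted inside the JMESPath expression itself. A minimal sketch, assuming the same DOMAIN variable; not verified on the asker's exact Ansible version:
- name: Extract addr for domain2.net via json_query
  debug:
    msg: "{{ DOMAIN | json_query(query) }}"
  vars:
    query: '"domain2.net".addr'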

json_query filter list of dicts which has value in a list

I have a list of dicts mydicts=[{name: foo, data: 1}, {name: foo1, data: 3}, {name: bar, data: 2}] and a list of names names=[foo, foo1, mars, jonh]
I want to create a list of dicts that only contains those whose name is in the names list. I know that to select a single dict I can do jq="[?(name=='foo')]" and then mydicts | json_query(jq). However, I cannot make the contains version work so far. I need something like [?(contains(names, name))]. Can anyone show me an example of how to do this?
It seems json_query correctly gets the value of name with contains in jq2, but it thinks it is a variable instead of a string. I just found out there is a bug in Ansible and we need | to_json | from_json to handle it.
In jq3, it seems it never gets the value of names; I guess I need to wrap both mydicts and names into a dict and pass that to json_query.
- hosts: localhost
  gather_facts: False
  vars:
    mydicts:
      - name: foo
        data: 1
      - name: bar
        data: 2
    names:
      - foo
      - foo1
      - mars
      - jonh
    jq1: "[?(name == 'foo')]"
    jq2: "[?(contains(name, 'f'))]"
    jq3: "[?(contains(names, name))]"
  tasks:
    - debug:
        var: json
    - name: JMEPath test equal
      debug:
        msg: "{{ mydicts | json_query(jq1) }}"
    - name: JMEPath test str contain str does not work
      debug:
        msg: "{{ mydicts | json_query(jq2) }}"
    - name: JMEPath test another list contain str
      debug:
        msg: "{{ mydicts | json_query(jq3) }}"
      ignore_errors: yes
TASK [JMEPath test equal] ************************************************************************************************************************************************
ok: [localhost] => {
"msg": [
{
"data": 1,
"name": "foo"
}
]
}
TASK [JMEPath test str contain str] **************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "JMESPathError in json_query filter plugin:\nIn function contains(), invalid type for value: foo, expected one of: ['array', 'string'], received: \"unknown\""}
...ignoring
TASK [JMEPath test another list contain str] *****************************************************************************************************************************
fatal: [localhost]: FAILED! => {"msg": "JMESPathError in json_query filter plugin:\nIn function contains(), invalid type for value: None, expected one of: ['array', 'string'], received: \"null\""}
...ignoring
For this simple example there is a workaround: the Jinja filter "{{ mydicts | selectattr('name', 'in', names) | list }}". But I still need the json_query functionality for deeply nested keys.
JMESPath does not have visibility into the Jinja2 variables in the way that the Jinja2 filters do, which is why the reference to names doesn't do what you are expecting: JMESPath thinks that it is a field named names on the current node, rather than a reference to that Jinja2 variable.
AFAIK you can either construct an artificial input structure just to make both of those pieces of data available to JMESPath (akin to {"names": [...], "mydicts": [...]}), or you can inline the names list into the JMESPath query:
- debug:
    msg: '{{ mydicts | json_query(jq) }}'
  vars:
    jq: '[? contains(`{{ names | to_json }}`, name) ]'
Q: "[?(contains(names, name))]. Can anyone show me an example of how to do this?"
A: IMHO, it's not possible. The function contains takes the item from the input and tests whether the item contains the parameter.
boolean contains(array|string $subject, any $search)
You would need a function with the same functionality but switched parameters. Hypothetically,
boolean in(any $search, array|string $subject)
which could be translated to a (hypothetical) query
jq: "[?(in(name, names))]"
There is no such function.
Notes:
selectattr can handle nested attributes
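To illustrate that note, here is a minimal sketch using selectattr with a dotted attribute path on a hypothetical nested structure (the meta.name key is invented for illustration, not taken from the question):
- debug:
    msg: "{{ nested_dicts | selectattr('meta.name', 'in', names) | list }}"
  vars:
    names: [foo, foo1]
    nested_dicts:
      - meta: {name: foo}
        data: 1
      - meta: {name: bar}
        data: 2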

Get the newest dictionary from a list in ansible

I have a list of dictionaries and I want to get the latest one.
Reviewing the Jinja2 docs, it seems I should be able to do this:
- set_fact:
    t:
      - a: 1
        b: 2
      - a: 3
        b: 1
- debug:
    msg: "{{ t | max(attribute='a') }}"
But that fails with
fatal: [localhost]: FAILED! => {
    "msg": "Unexpected templating type error occurred on ({{ t | max(attribute='a') }}): max() got an unexpected keyword argument 'attribute'"
}
What is the best way to do this? Of course, my real use case is harder than that small demo.
My task looks something like this:
- check_mode: no
  set_fact:
    tgs_info: "{{ tgs_info | default({}) | combine({ item: all_tg_info | to_json | from_json | json_query(query) | max(attribute='target_group_name') }) }}"
  vars:
    query: "target_groups[?contains(target_group_name, `{{ product }}`) == `true`] | [?ends_with(target_group_name, `{{ tg_suffix }}{{ item }}`) == `true`]"
  loop: "{{ projects | selectattr('protocol', 'match', '^HTTP$') | map(attribute='port') | list }}"
The idea is that all_tg_info contains all the autoscaling info of my AWS account. I filter it and I want to get the latest entry based on the name or any other parameter.
I'm kind of stuck here.
Update: as reported in @Hellseher's comment below, from the Ansible 2.11 release, max|min(attribute='someattr') will be available. The answer below will therefore become obsolete from that version onward. See the corresponding pull request.
"reviewing jinja2 docs it seems I should be able to do this..."
Even though the Jinja2 builtin filters documentation mentions possible parameters for the max filter, there is an open bug report on the Ansible side to support them.
In this case you can easily achieve your requirement with the json_query filter. Here is a demo playbook with your simplistic data (as I don't have your more elaborate one...). You can probably adapt this to your actual json_query.
---
- hosts: localhost
  gather_facts: false
  vars:
    t:
      - a: 1
        b: 2
      - a: 3
        b: 1
  tasks:
    - debug:
        msg: "{{ t | json_query('max_by(@, &a)') }}"
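Applied to the question's real expression, the max(attribute=...) part could presumably be replaced by a max_by appended to the JMESPath query itself. A sketch, reusing the variables from the question and not tested against the actual data:
- check_mode: no
  set_fact:
    tgs_info: "{{ tgs_info | default({}) | combine({ item: all_tg_info | to_json | from_json | json_query(query) }) }}"
  vars:
    query: "target_groups[?contains(target_group_name, `{{ product }}`)] | [?ends_with(target_group_name, `{{ tg_suffix }}{{ item }}`)] | max_by(@, &target_group_name)"
  loop: "{{ projects | selectattr('protocol', 'match', '^HTTP$') | map(attribute='port') | list }}"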

ansible JMESPath errors while parsing ansible facts

I am trying to filter all the strings which contain "RegButton-" from the Ansible facts below and use the output as a list of items in the next play.
I am trying to use the json_query filter, but it is failing with the error below.
Ansible fact:
{
    "ansible_facts": {
        "srcgrpname": [
            "RegButton-48773",
            "test_vio",
            "RegButton-23395",
            "RegButton-520859",
            "RegButton-743141",
            "RegButton-297578",
            "RegButton-186156"
        ]
    },
    "changed": false
}
Playbook entry:
- name: "Filter Regbutton policy Names"
  set_fact:
    srcgrpname2: "{{ resultid1 | json_query(query) }}"
  vars:
    query: "ansible_facts.srcgrpname[?contains(@, 'RegButton-') == `true`]"
Error that I am receiving:
{
"msg": "JMESPathError in json_query filter plugin:\nIn function contains(), invalid type for value: RegButton-48773, expected one of: ['array', 'string'], received: \"unknown\"",
"_ansible_no_log": false
}
It's possible to use select and regex. For example, the tasks below
- set_fact:
    srcgrpname2: "{{ ansible_facts.srcgrpname |
                     select('regex', '^RegButton-(.*)$') |
                     list }}"
- debug:
    var: srcgrpname2
give
"srcgrpname2": [
"RegButton-48773",
"RegButton-23395",
"RegButton-520859",
"RegButton-743141",
"RegButton-297578",
"RegButton-186156"
]
Notes
It's an open issue with contains:
json_query filter fails when using the functions "contains", "starts_with", others #27299
Allow for subclassed types #158
It's reproducible: "RegButton-48773, expected one of: ['array', 'string'], received: unknown"
See json format query with contains
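Given the to_json | from_json workaround discussed earlier on this page for the same contains() issue, the original json_query approach may also work if the registered data is round-tripped through JSON first. A sketch, not verified against the asker's Ansible version:
- name: "Filter Regbutton policy Names"
  set_fact:
    srcgrpname2: "{{ resultid1 | to_json | from_json | json_query(query) }}"
  vars:
    query: "ansible_facts.srcgrpname[?contains(@, 'RegButton-')]"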

When to use from_json filter in Ansible?

When should I use the from_json filter in Ansible?
I found out that using it sometimes has an effect and sometimes has none.
Please consider the following example which illustrates the inconsistency I am getting.
Included, in reverse order, are: the questions, the expected result, the actual result, the playbook, and the data. The data is taken from this question and the playbook is based on this answer.
The question(s):
Why does storing the left part (before json_query) of the following expression in a variable and then using json_query on the variable cause the expression to be evaluated differently?
"{{ lookup('file','test.json') | json_query(query) }}"
Why does adding the from_json filter alter the results (but not when processing a variable)?
"{{ lookup('file','test.json') | from_json | json_query(query) }}"
Expected result:
The last four tasks should give the same result. Alternatively, the last two tasks should give the same result as the previous two.
Actual result (last four tasks only):
One task's result differs.
TASK [This query is run against lookup value with from_json stored in a variable] ***
ok: [localhost] => {
"msg": [
678
]
}
TASK [This query is run against lookup value without from_json stored in a variable] ***
ok: [localhost] => {
"msg": [
678
]
}
TASK [This query is run directly against lookup value with from_json] **********
ok: [localhost] => {
"msg": [
678
]
}
TASK [This query is run directly against lookup value without from_json - the result is empty - why?] ***
ok: [localhost] => {
"msg": ""
}
The playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - set_fact:
        from_lookup_with_from_json: "{{ lookup('file','test.json') | from_json }}"
    - set_fact:
        from_lookup_without_from_json: "{{ lookup('file','test.json') }}"
    - name: Save the lookup value stored in a variable in a file for comparison
      copy: content="{{ from_lookup_with_from_json }}" dest=./from_lookup_with_from_json.txt
    - name: Save the lookup value stored in a variable in a file for comparison (they are the same)
      copy: content="{{ from_lookup_without_from_json }}" dest=./from_lookup_without_from_json.txt
    - name: This query is run against lookup value with from_json stored in a variable
      debug: msg="{{ from_lookup_with_from_json | json_query(query) }}"
      vars:
        query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
    - name: This query is run against lookup value without from_json stored in a variable
      debug: msg="{{ from_lookup_without_from_json | json_query(query) }}"
      vars:
        query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
    - name: This query is run directly against lookup value with from_json
      debug: msg="{{ lookup('file','test.json') | from_json | json_query(query) }}"
      vars:
        query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
    - name: This query is run directly against lookup value without from_json - the result is empty - why?
      debug: msg="{{ lookup('file','test.json') | json_query(query) }}"
      vars:
        query: "Foods[].{id: Id, for: (Tags[?Key=='For'].Value)[0]} | [?for=='Tigger'].id"
The data (test.json):
{ "Foods" :
[ { "Id": 456
, "Tags":
[ {"Key":"For", "Value":"Heffalump"}
, {"Key":"Purpose", "Value":"Food"}
]
}
, { "Id": 678
, "Tags":
[ {"Key":"For", "Value":"Tigger"}
, {"Key":"Purpose", "Value":"Food"}
]
}
, { "Id": 911
, "Tags":
[ {"Key":"For", "Value":"Roo"}
, {"Key":"Purpose", "Value":"Food"}
]
}
]
}
json_query requires a Python object (dict) as input; if you feed it a string, it gives an empty string as the result.
You get different results because of the tricky way the Ansible templating engine works.
I should definitely write a post about it on my site...
After evaluating a Jinja2 expression, Ansible tries to cast complex types to Python objects (like dict or list). See my other answer.
In your case:
1.
- set_fact:
    from_lookup_with_from_json: "{{ lookup('file','test.json') | from_json }}"
from_lookup_with_from_json is a dict, because you manually convert the JSON string from the file into a dict with the from_json filter.
2.
- set_fact:
    from_lookup_without_from_json: "{{ lookup('file','test.json') }}"
from_lookup_without_from_json also becomes a dict, because Ansible converts it when the Jinja2 expression ends with }}. So from_json is actually unnecessary as the last filter in the chain.
3.
debug: msg="{{ lookup('file','test.json') | from_json | json_query(query) }}"
Again, you manually convert the JSON string here, so json_query gets a dict as input.
4.
debug: msg="{{ lookup('file','test.json') | json_query(query) }}"
In this case you feed a JSON string (not a dict) into the json_query filter. As everything happens inside one Jinja2 expression, Ansible doesn't attempt to convert anything in between.
You can also get an empty string result with a variable this way:
- set_fact:
    from_lookup_force_string: "{{ lookup('file','test.json') | string }}"
In this case from_lookup_force_string will not be converted by the Ansible templating engine, and json_query will give you an empty response for it.
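To observe this casting behaviour directly, the type_debug filter can be used to inspect what each stage actually is. A small sketch, assuming the same test.json file and the Ansible behaviour described above:
- debug:
    msg: "{{ lookup('file','test.json') | type_debug }}"                # a string inside a single expression
- debug:
    msg: "{{ lookup('file','test.json') | from_json | type_debug }}"    # a dict after from_json
- set_fact:
    stored: "{{ lookup('file','test.json') }}"
- debug:
    msg: "{{ stored | type_debug }}"                                    # a dict after Ansible's cast on set_fact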
