Ansible - How to combine list attributes? - ansible

I have two separate lists. The first is a list (base_list) with base parameters, and the second is a list (dev_list) with parameters for a specific environment (stand).
"base_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/logstash.yml",
"src": "/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml"
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json"
},
{
"name": "grafana",
"path": "grafana/conf/nginx.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json"
},
{
"name": "grafana",
"path": "grafana/conf/config.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
},
]
"dev_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/jvm.options",
"src": "/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
}
]
My goal is to combine these two lists so that each item.name appears once, with the associated item.path and item.src values merged into lists where there are several. Like this:
"end_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
},
{
"name": "grafana",
"path": [
"grafana/conf/grafana.json",
"grafana/conf/nginx.json",
"grafana/conf/config.json"
]
"src": [
"/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
]
},
]
What would be the best way to do this?

This would probably be easier with a custom Python filter, but here's a solution using Ansible's built-in filters:
---
- hosts: localhost
  gather_facts: false
  vars:
    "base_list": [
      {
        "name": "kibana",
        "path": "kibana/conf/kibana.xml",
        "src": "/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml"
      },
      {
        "name": "logstash",
        "path": "logstash/conf/logstash.yml",
        "src": "/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml"
      },
      {
        "name": "grafana",
        "path": "grafana/conf/grafana.json",
        "src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json"
      }
    ]
    "dev_list": [
      {
        "name": "kibana",
        "path": "kibana/conf/kibana.xml",
        "src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
      },
      {
        "name": "logstash",
        "path": "logstash/conf/jvm.options",
        "src": "/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
      }
    ]
  tasks:
    - set_fact:
        end_list: >-
          {{ end_list|default([]) + [
            {
              'name': item.0.name,
              'path': item.1.path|ternary([item.0.path, item.1.path], item.0.path),
              'src': item.1.src|ternary([item.0.src, item.1.src], item.0.src)
            }
          ] }}
      loop: >-
        {{ base_list|zip_longest(dev_list,
           fillvalue={'path': false, 'src': false})|list }}

    - debug:
        var: end_list
This was a little tricky to put together, so I'll try to describe the various parts:
The loop uses the zip_longest filter. Given the lists list1=[1, 2, 3] and list2=[11, 12], list1|zip_longest(list2) would produce [[1,11], [2,12], [3,None]] (that is, by default, zip_longest will use None as a fill value if one list is shorter than the other). By setting the fillvalue parameter, we can use a value other than None. In this case...
loop: >-
  {{ base_list|zip_longest(dev_list,
     fillvalue={'path': false, 'src': false})|list }}
...we're setting the fill value to a dictionary with stub values for path and src, since this makes the rest of the expression easier.
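To see that in isolation, here is a minimal sketch (a standalone debug task, not part of the playbook above):
- debug:
    msg: "{{ [1, 2, 3] | zip_longest([11, 12], fillvalue=0) | list }}"
This prints [[1, 11], [2, 12], [3, 0]].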
The meat of the solution is of course the set_fact action, which in simplified form looks like:
end_list: "{{ end_list|default([]) + [{...a dictionary...}] }}"
In other words, for each iteration of the loop, this will append a new dictionary to end_list.
We create the dictionary like this:
{
  'name': item.0.name,
  'path': item.1.path|ternary([item.0.path, item.1.path], item.0.path),
  'src': item.1.src|ternary([item.0.src, item.1.src], item.0.src)
}
We're using the ternary filter here, which evaluates its input as a boolean: if it's true, it selects the first argument, otherwise the second. Here we're taking advantage of the fillvalue we passed to the zip_longest filter: if dev_list is shorter than base_list, we'll have some items for which item.1.path and item.1.src are false, causing the ternary filter to fall back to the base values (item.0.path and item.0.src). In all other cases, we build a list by combining the values from each of base_list and dev_list.
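For example, a quick sketch of the filter on its own:
- debug:
    msg: "{{ false | ternary(['a', 'b'], 'a') }}"
This prints "a": since the input is false, ternary returns its second argument, which is exactly how the fillvalue stubs steer the expression above back to the base_list values.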
The result of running this playbook looks like:
ok: [localhost] => {
"end_list": [
{
"name": "kibana",
"path": [
"kibana/conf/kibana.xml",
"kibana/conf/kibana.xml"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml",
"/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
]
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": false
}
]
}
Let me know if that helps, and whether or not the resulting data structure is what you were looking for. I had to make a few assumptions since your example end_list contained invalid syntax, so I took a guess at what you wanted.
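As an aside, the custom Python filter mentioned at the top could look something like this (a sketch only; the file and filter names are hypothetical, and this simple version always merges duplicate names into lists rather than preferring one inventory over the other):
# filter_plugins/merge_lists.py
from itertools import groupby


def combine_by_name(base, extra):
    combined = sorted(base + extra, key=lambda item: item['name'])
    result = []
    for name, items in groupby(combined, key=lambda item: item['name']):
        items = list(items)
        if len(items) == 1:
            # only one entry for this name: keep it as-is
            result.append(items[0])
        else:
            # several entries: collect path/src into lists
            result.append({
                'name': name,
                'path': [i['path'] for i in items],
                'src': [i['src'] for i in items],
            })
    return result


class FilterModule(object):
    def filters(self):
        return {'combine_by_name': combine_by_name}
You would then write end_list: "{{ base_list | combine_by_name(dev_list) }}".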

Assuming you had well-formed JSON and those are properties of the root object, jq is perfectly suited for this. Group the contents of the arrays by name, then generate the appropriate result objects.
$ jq '{
  end_combine: (
    .base_list + .dev_list
    | group_by(.name)
    | map({ name: .[0].name, path: map(.path), src: map(.src) })
  )
}' input.json
{
"end_combine": [
{
"name": "grafana",
"path": [
"grafana/conf/grafana.json",
"grafana/conf/nginx.json",
"grafana/conf/config.json"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
]
},
{
"name": "kibana",
"path": [
"kibana/conf/kibana.xml",
"kibana/conf/kibana.xml"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml",
"/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
]
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
}
]
}
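If you want singleton lists collapsed back to scalars, as in the question's end_list, a small tweak along these lines should work (a sketch; note that kibana still ends up with two entries here, since it appears in both input lists):
$ jq '{
  end_combine: (
    .base_list + .dev_list
    | group_by(.name)
    | map({ name: .[0].name, path: map(.path), src: map(.src) }
          | .path |= (if length == 1 then .[0] else . end)
          | .src  |= (if length == 1 then .[0] else . end))
  )
}' input.json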


Getting the values of keys of Ansible JSON output

I have the following JSON data
{
"docker_compose_init_result": {
"changed": true,
"failed": false,
"services": {
"grafana": {
"docker-compose_grafana_1": {
"cmd": [],
"image": "grafana/grafana:8.5.14",
"labels": {
"com.docker.compose.config-hash": "4d0b5dd6e697a8fe5bf5074192770285e54da43ad32cc34ba9c56505cb709431",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "grafana",
"com.docker.compose.version": "1.29.2"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.2",
"IPPrefixLen": 16,
"aliases": [
"3d19f54271b2",
"grafana"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:02"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"node-red": {
"docker-compose_node-red_1": {
"cmd": [],
"image": "nodered/node-red:2.2.2",
"labels": {
"authors": "Dave Conway-Jones, Nick O'Leary, James Thomas, Raymond Mouthaan",
"com.docker.compose.config-hash": "5610863d4b28b11645acb5651e7bab174125743dc86a265969788cc8ac782efe",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "node-red",
"com.docker.compose.version": "1.29.2",
"org.label-schema.arch": "",
"org.label-schema.build-date": "2022-02-18T21:01:04Z",
"org.label-schema.description": "Low-code programming for event-driven applications.",
"org.label-schema.docker.dockerfile": ".docker/Dockerfile.alpine",
"org.label-schema.license": "Apache-2.0",
"org.label-schema.name": "Node-RED",
"org.label-schema.url": "https://nodered.org",
"org.label-schema.vcs-ref": "",
"org.label-schema.vcs-type": "Git",
"org.label-schema.vcs-url": "https://github.com/node-red/node-red-docker",
"org.label-schema.version": "2.2.2"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.4",
"IPPrefixLen": 16,
"aliases": [
"fc56e973c98d",
"node-red"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:04"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"organizr": {
"docker-compose_organizr_1": {
"cmd": [],
"image": "organizr/organizr:linux-amd64",
"labels": {
"base.maintainer": "christronyxyocum,Roxedus",
"base.s6.arch": "amd64",
"base.s6.rel": "2.2.0.3",
"com.docker.compose.config-hash": "430b338b0c0892a25522e1b641a9e3a08eedd255309b1cd275b22a3362dcac58",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "organizr",
"com.docker.compose.version": "1.29.2",
"maintainer": "christronyxyocum,Roxedus",
"org.label-schema.description": "Baseimage for Organizr",
"org.label-schema.name": "organizr/base",
"org.label-schema.schema-version": "1.0",
"org.label-schema.url": "https://organizr.app/",
"org.label-schema.vcs-url": "https://github.com/organizr/docker-base",
"org.opencontainers.image.created": "2022-05-08_15",
"org.opencontainers.image.source": "https://github.com/Organizr/docker-organizr/tree/master",
"org.opencontainers.image.title": "organizr/base",
"org.opencontainers.image.url": "https://github.com/Organizr/docker-organizr/blob/master/README.md"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.3",
"IPPrefixLen": 16,
"aliases": [
"organizr",
"f3f61d8938fe"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:03"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"prometheus": {
"docker-compose_prometheus_1": {
"cmd": [
"--config.file=/etc/prometheus/prometheus.yml",
"--storage.tsdb.path=/prometheus",
"--web.console.libraries=/etc/prometheus/console_libraries",
"--web.console.templates=/etc/prometheus/consoles",
"--web.enable-lifecycle"
],
"image": "prom/prometheus:v2.35.0",
"labels": {
"com.docker.compose.config-hash": "7d2ce7deba1a152ebcf4fe5494384018c514f6703b5e906aef6f2e8820733cb2",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "prometheus",
"com.docker.compose.version": "1.29.2",
"maintainer": "The Prometheus Authors <prometheus-developers#googlegroups.com>"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.5",
"IPPrefixLen": 16,
"aliases": [
"04f346e6694f",
"prometheus"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:05"
}
},
"state": {
"running": true,
"status": "running"
}
}
}
}
}
}
And I need an output similar to
- docker-compose_grafana_1
- docker-compose_node-red_1
- docker-compose_organizr_1
- docker-compose_prometheus_1
I can do that with jq easy-peasy:
jq --raw-output '.docker_compose_init_result.services[] | keys | .[]' jsondata.json
But I am not able to do it with Ansible and especially json_query (and thus JMESPath).
I was able to get one key with
jp -f jsondata.json "keys(docker_compose_init_result.services.grafana)"
[
"docker-compose_grafana_1"
]
But have no idea how to get all four. Also sometimes expressions that worked with jp did not work in Ansible with json_query, which additionally made me mad.
If anyone can give me a solution for this (whether it's with json_query or not), and in the best case explain how it works, I would be very glad.
Solution using only builtin filters:
docker_compose_list: "{{ docker_compose_init_result.services | dict2items
                         | map(attribute='value') | map('dict2items')
                         | flatten | map(attribute='key') }}"
which gives once expanded:
{
"docker_compose_list": [
"docker-compose_grafana_1",
"docker-compose_node-red_1",
"docker-compose_organizr_1",
"docker-compose_prometheus_1"
]
}
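Step by step, the chain produces these intermediate shapes (sketched as comments, values abbreviated):
# services | dict2items          -> [{key: 'grafana', value: {...}}, ...]
# | map(attribute='value')       -> [{'docker-compose_grafana_1': {...}}, ...]
# | map('dict2items') | flatten  -> [{key: 'docker-compose_grafana_1', value: {...}}, ...]
# | map(attribute='key')         -> ['docker-compose_grafana_1', ...]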
In a pure JMESPath way, your query should be:
docker_compose_init_result.services.*.keys(@)[]
Where:
.* is the notation for an object projection, which gets you the values under docker_compose_init_result.services, whatever the keys might be.
.keys(@) is the keys() function, applied to the current node (@), which effectively means that keys() runs on every object projected by docker_compose_init_result.services.* (e.g. on docker_compose_init_result.services.grafana, docker_compose_init_result.services."node-red", and so on), yielding the container names.
[] is the flatten operator, which reduces your array of arrays to a single-level array.
The query below
docker_compose_list: "{{ docker_compose_init_result|
                         json_query(_query) }}"
_query: 'services.*.keys(@)'
gives
docker_compose_list:
- - docker-compose_grafana_1
- - docker-compose_node-red_1
- - docker-compose_organizr_1
- - docker-compose_prometheus_1
Select first items
docker_compose_list: "{{ docker_compose_init_result|
                         json_query(_query)|
                         map('first')|list }}"
or flatten the list
docker_compose_list: "{{ docker_compose_init_result|
                         json_query(_query)|
                         flatten }}"
both give
docker_compose_list:
- docker-compose_grafana_1
- docker-compose_node-red_1
- docker-compose_organizr_1
- docker-compose_prometheus_1
Note: Be careful how you select or flatten the list. There might be a reason for the second-level keys. For example, if there are more keys in services.grafana the result might be
docker_compose_list:
- - docker-compose_grafana_1
  - docker-compose_grafana_2
  - docker-compose_grafana_3
- - docker-compose_node-red_1
- - docker-compose_organizr_1
- - docker-compose_prometheus_1
In this case, taking the first item or flattening the list doesn't necessarily give the result you want.
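If you do need to keep that grouping, one option (a sketch, assuming the structure shown in the question) is to build a dictionary mapping each service name to its container names instead of a flat list:
- set_fact:
    docker_compose_map: "{{ docker_compose_map | default({})
                            | combine({item.key: item.value | dict2items
                                                 | map(attribute='key') | list}) }}"
  loop: "{{ docker_compose_init_result.services | dict2items }}"
This yields, e.g., {'grafana': ['docker-compose_grafana_1'], 'node-red': ['docker-compose_node-red_1'], ...}, which stays unambiguous when a service has several containers.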

sputnikdao2 - ChangePolicy - "data did not match any variant of untagged enum VersionedPolicy"

I am trying to change the policy for a deployed sputnikdao2 contract.
I am getting this error:
"ExecutionError":"`Smart contract panicked: panicked at 'Failed to deserialize input from JSON.: Error(\"data did not match any variant of untagged enum VersionedPolicy\", line: 1, column: 423)', src/proposals.rs:384:1`"
},
"transaction_outcome":{
"block_hash":"8aUiGxnJv12BASyKjPKVsYWegEmbH8Lz1LsXu7gGXFwa",
"id":"FTTFLVZzzrK7CT6KCNqWVCs67Hc5oBRHBT9TqCciqjY6",
"outcome":{
"executor_id":"hundred.testnet",
"gas_burnt":2428900339092,
"logs":[
],
"receipt_ids":[
"EuNWubtxcY9YjcbTxSwrrYj59GBVj8u6a8RktQj7tHSh"
],
"status":{
"SuccessReceiptId":"EuNWubtxcY9YjcbTxSwrrYj59GBVj8u6a8RktQj7tHSh"
},
"tokens_burnt":"242890033909200000000"
},
"proof":[
{
"direction":"Left",
"hash":"9eTyjRrHrNP1Bmw4rDgSouGmvxP7Lg3EaoUn15qBQH3h"
},
{
"direction":"Right",
"hash":"4NLf8mPom49oVbXmB2ouujxctjbyZC5FBi5ny1NFcXYj"
}
]
}
}
You can see more information here:
https://gist.github.com/hiba-machfej/3a681d22fc2310966ca7692ec3a189d2
I was trying to send this:
'{"proposal": {"description": "Add New Council", "kind": {"ChangePolicy": { "policy": { "roles": [{ "name": "all", "kind": "Everyone", "permissions": [ "*:AddProposal" ], "vote_policy": "{}"}], "default_vote_policy": { "weight_kind": "RoleWeight", "quorum": "0", "threshold": [ 1, 2 ] }, "proposal_bond": "1000000000000000000000000", "proposal_period": "604800000000000", "bounty_bond": "1000000000000000000000000", "bounty_forgiveness_period": "86400000000000"}}}}}' \
--accountId hundred.testnet \
--amount 1
I re-wrote the objects again and it worked:
'{"proposal": {"description": "Add New Council", "kind": {"ChangePolicy": { "policy": { "roles": [{ "name": "all", "kind": "Everyone", "permissions": ["*:AddProposal", "*:Finalize"], "vote_policy": {}}], "default_vote_policy": { "weight_kind": "RoleWeight", "quorum": "0", "threshold": [ 1, 2 ]}, "proposal_bond": "1000000000000000000000000", "proposal_period": "604800000000000", "bounty_bond": "1000000000000000000000000", "bounty_forgiveness_period": "86400000000000" }}}}}' \
--accountId hundred.testnet \
--amount 1
This is the receipt:
https://explorer.testnet.near.org/transactions/DxXLUUcx2jcLdoCFT2HbhSinWV6zjSREUkNXnN3kkHD4
I think there was an error in the JSON format of the first command I was running.
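The relevant difference between the two payloads (everything else is identical) is that vote_policy was a JSON string in the first attempt and a JSON object in the second, which is what the untagged-enum deserializer chokes on (the second attempt also adds a permission):
- "permissions": [ "*:AddProposal" ], "vote_policy": "{}"
+ "permissions": ["*:AddProposal", "*:Finalize"], "vote_policy": {}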

Gaussian constraint in `normfactor`

I would like to understand how to impose a Gaussian constraint with central value expected_yield and error expected_y_error on a normfactor modifier. I want to fit observed_data with a single sample, MC_derived_sample. My goal is to extract the bu_y modifier such that the integral of MC_derived_sample scaled by bu_y is Gaussian-constrained to expected_yield +/- expected_y_error.
My present attempt employs the normsys modifier as follows:
spec = {
    "channels": [
        {
            "name": "singlechannel",
            "samples": [
                {
                    "name": "constrained_template",
                    "data": MC_derived_sample*expected_yield,  # expect normalisation around 1
                    "modifiers": [
                        {"name": "bu_y", "type": "normfactor", "data": None},
                        {"name": "bu_y_constr", "type": "normsys",
                         "data":
                             {"lo": 1 - (expected_y_error/expected_yield),
                              "hi": 1 + (expected_y_error/expected_yield)}
                         },
                    ]
                },
            ]
        },
    ],
    "observations": [
        {
            "name": "singlechannel",
            "data": observed_data,
        }
    ],
    "measurements": [
        {
            "name": "sig_y_extraction",
            "config": {
                "poi": "bu_y",
                "parameters": [
                    {"name": "bu_y",
                     "bounds": [[1 - 5*expected_y_error/expected_yield,
                                 1 + 5*expected_y_error/expected_yield]],
                     "inits": [1.]},
                ]
            }
        }
    ],
    "version": "1.0.0"
}
My thinking is that normsys will introduce a Gaussian constraint about unity on the sample scaled by expected_yield.
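For reference, this is roughly how I build and fit the model (a sketch, assuming pyhf is installed; par_names is a property in recent pyhf releases, and the variable names are the ones used in the spec above):
import pyhf

model = pyhf.Model(spec)
# actual observations followed by the auxiliary data that carries the constraint terms
data = list(observed_data) + model.config.auxdata
best_fit = pyhf.infer.mle.fit(data, model)
print(dict(zip(model.config.par_names, best_fit)))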
Can you please provide me any feedback as to whether this approach is correct?
In addition, suppose I wanted to include a staterror modifier for the Barlow-Beeston lite implementation, would this be the correct way of doing so?
"samples": [
{
"name": "constrained_template",
"data": MC_derived_sample*expected_yield, #expect normalisation around 1
"modifiers": [
{"name": "BB_lite_uncty", "type": "staterror", "data": np.sqrt(MC_derived_sample)*expected_yield }, #assume poisson error and scale by central value of constraint
{"name": "bu_y", "type": "normfactor", "data": None },
{"name": "bu_y_constr", "type": "normsys",
"data":
{"lo" : 1 - (expected_y_error/expected_yield),
"hi" : 1 + (expected_y_error/expected_yield)}
},
]
}
Thanks a lot in advance for your help,
Blaise

Ansible JSON parsing empty string

I am trying to learn JSON parsing using json_query (JMESPath) in Ansible.
Please consider the following play:
---
- name: GET ALL THE INTERFACES
  junos_command:
    commands: show configuration interfaces | display json
  register: A

- name: DISPLAY VARIABLE A CONTENTS
  debug:
    var: A.stdout_lines

- name: JSON QUERY TO STORE PORTS IN NEW VARIABLE ALL_PORTS
  set_fact:
    ALL_PORTS: "{{ A.stdout_lines | json_query(jmesquery) }}"
  vars:
    jmesquery: 'configuration.interfaces.interface[*].name'

- name: DISPLAY VARIABLE ALL_PORTS CONTENTS
  debug:
    var: ALL_PORTS
Based on the JSON query, the port ge-0/0/0 should be stored in ALL_PORTS, but we do not see that when we run the playbook; the debug of ALL_PORTS shows it is empty.
PLAY [ROUTER-STIG-PLAYBOOK] ************************************************************************************************
TASK [STIG_ROUTER : GET ALL THE INTERFACES] ********************************************************************************
ok: [192.168.22.9]
TASK [STIG_ROUTER : DISPLAY VARIABLE A CONTENTS] ***************************************************************************
ok: [192.168.22.9] => {
"A.stdout_lines": [
{
"configuration": {
"#": {
"junos:changed-localtime": "2020-05-10 11:14:49 UTC",
"junos:changed-seconds": "1589109289",
"xmlns": "http://xml.juniper.net/xnm/1.1/xnm"
},
"interfaces": {
"interface": [
{
"name": "ge-0/0/0",
"unit": [
{
"family": {
"inet": {
"address": [
{
"name": "192.168.22.9/24"
}
]
}
},
"name": 0
}
]
}
]
},
"security": {
"policies": {
"global": {
"policy": [
{
"match": {
"application": [
"any"
],
"destination-address": [
"any"
],
"source-address": [
"any"
]
},
"name": "TEST",
"then": {
"permit": [
null
]
}
}
]
}
},
"screen": {
"ids-option": [
{
"icmp": {
"ping-death": [
null
]
},
"ip": {
"source-route-option": [
null
],
"tear-drop": [
null
]
},
"name": "untrust-screen",
"tcp": {
"land": [
null
],
"syn-flood": {
"alarm-threshold": 1024,
"attack-threshold": 200,
"destination-threshold": 2048,
"queue-size": 2000,
"source-threshold": 1024,
"timeout": 20
}
}
}
]
},
"zones": {
"security-zone": [
{
"name": "trust",
"tcp-rst": [
null
]
},
{
"name": "untrust",
"screen": "untrust-screen"
},
{
"host-inbound-traffic": {
"protocols": [
{
"name": "all"
}
],
"system-services": [
{
"name": "all"
}
]
},
"interfaces": [
{
"name": "ge-0/0/0.0"
}
],
"name": "A"
}
]
}
},
"system": {
"license": {
"autoupdate": {
"url": [
{
"name": "https://ae1.juniper.net/junos/key_retrieval"
}
]
}
},
"root-authentication": {
"encrypted-password": "$6$thcHCjAV$e3o5ZRNWv7WtysOxuKpBP2X0cA3QDNtWYyCSBAUkImSEsulEGTgfwEQBa12Wll0fegpwvZfTHLvCbDUIW1n211"
},
"services": {
"ftp": [
null
],
"netconf": {
"ssh": [
null
]
},
"ssh": {
"root-login": "allow"
}
},
"syslog": {
"file": [
{
"contents": [
{
"any": [
null
],
"name": "any"
},
{
"info": [
null
],
"name": "authorization"
}
],
"name": "messages"
},
{
"contents": [
{
"any": [
null
],
"name": "interactive-commands"
}
],
"name": "interactive-commands"
}
],
"user": [
{
"contents": [
{
"emergency": [
null
],
"name": "any"
}
],
"name": "*"
}
]
}
},
"version": "20191026.124700_builder.r1063854"
}
}
]
}
TASK [STIG_ROUTER : JSON QUERY TO STORE PORTS IN NEW VARIABLE ALL_PORTS] ***************************************************
ok: [192.168.22.9]
TASK [STIG_ROUTER : DISPLAY VARIABLE ALL_PORTS CONTENTS] *******************************************************************
ok: [192.168.22.9] => {
"ALL_PORTS": ""
}
I used an online JSON query tester:
https://jsonpath.com/
When I checked my query "configuration.interfaces.interface[*].name" against A.stdout_lines, I found the expected response, i.e.
[
ge-0/0/0
]
Any feedback/guidance is appreciated!!
Have a good weekend!!
Your JMESPath may very well be correct for one object, but as its name implies, and as your debug: var=A.stdout_lines output shows, stdout_lines is a list.
So, you can do one of several things:
recognize that your stdout_lines only contains one object and just feed that into json_query as {{ A.stdout_lines[0] | json_query(jmesquery) }}
use the map filter to apply that same filter to every list item, with something like {{ A.stdout_lines | map("json_query", jmesquery) | list }}
rewrite your JMESPath to apply that filter to the input list, akin to json_query("[*].configuration.AND THE REST HERE")
Those last two will naturally produce a different output shape, since they are lists of lists, and so it will look like [['ge-0/0/0']] when output, but they do have the advantage that if your stdout_lines ever does mysteriously start to contain more objects, they will be json_query-ied as expected
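For instance, the second option drops into the original playbook like this (a sketch reusing the question's own task and variable names):
- name: JSON QUERY TO STORE PORTS IN NEW VARIABLE ALL_PORTS
  set_fact:
    ALL_PORTS: "{{ A.stdout_lines | map('json_query', jmesquery) | list }}"
  vars:
    jmesquery: 'configuration.interfaces.interface[*].name'
which would set ALL_PORTS to [['ge-0/0/0']] for the output shown above.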

filter json via bash - case insensitive

I have some JSON and need to filter it by the value of the DNSName attribute. The filter must be case-insensitive.
How can I do that? Is there a possibility to solve it with jq?
This is how I create the JSON:
aws elbv2 describe-load-balancers --region=us-west-2 | jq
My unfiltered source json code looks like this:
{
"LoadBalancers": [
{
"IpAddressType": "ipv4",
"VpcId": "vpc-abcdabcd",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:000000000000:loadbalancer/app/MY-LB1/a00000000000000a",
"State": {
"Code": "active"
},
"DNSName": "MY-LB1-123454321.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-00100100",
"sg-01001000",
"sg-10010001"
],
"LoadBalancerName": "MY-LB1",
"CreatedTime": "2018-01-01T00:00:00.000Z",
"Scheme": "internet-facing",
"Type": "application",
"CanonicalHostedZoneId": "ZZZZZZZZZZZZZ",
"AvailabilityZones": [
{
"SubnetId": "subnet-17171717",
"ZoneName": "us-west-2a"
},
{
"SubnetId": "subnet-27272727",
"ZoneName": "us-west-2c"
},
{
"SubnetId": "subnet-37373737",
"ZoneName": "us-west-2b"
}
]
},
{
"IpAddressType": "ipv4",
"VpcId": "vpc-abcdabcd",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:000000000000:loadbalancer/app/MY-LB2/b00000000000000b",
"State": {
"Code": "active"
},
"DNSName": "MY-LB2-9876556789.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-88818881"
],
"LoadBalancerName": "MY-LB2",
"CreatedTime": "2018-01-01T00:00:00.000Z",
"Scheme": "internet-facing",
"Type": "application",
"CanonicalHostedZoneId": "ZZZZZZZZZZZZZ",
"AvailabilityZones": [
{
"SubnetId": "subnet-54545454",
"ZoneName": "us-west-2a"
},
{
"SubnetId": "subnet-64646464",
"ZoneName": "us-west-2c"
},
{
"SubnetId": "subnet-74747474",
"ZoneName": "us-west-2b"
}
]
}
]
}
I now want some bash code to filter this result for the record with the DNSName property value MY-LB2-9876556789.us-west-2.elb.amazonaws.com, and I need the entire LoadBalancer object back as a result. This is what I want my result to look like:
{
"IpAddressType": "ipv4",
"VpcId": "vpc-abcdabcd",
"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-west-2:000000000000:loadbalancer/app/MY-LB2/b00000000000000b",
"State": {
"Code": "active"
},
"DNSName": "MY-LB2-9876556789.us-west-2.elb.amazonaws.com",
"SecurityGroups": [
"sg-88818881"
],
"LoadBalancerName": "MY-LB2",
"CreatedTime": "2018-01-01T00:00:00.000Z",
"Scheme": "internet-facing",
"Type": "application",
"CanonicalHostedZoneId": "ZZZZZZZZZZZZZ",
"AvailabilityZones": [
{
"SubnetId": "subnet-54545454",
"ZoneName": "us-west-2a"
},
{
"SubnetId": "subnet-64646464",
"ZoneName": "us-west-2c"
},
{
"SubnetId": "subnet-74747474",
"ZoneName": "us-west-2b"
}
]
}
Does anyone know how to do it?
Update:
This solution works, but is not case insensitive:
aws elbv2 describe-load-balancers --region=us-west-2 | jq -c '.LoadBalancers[] | select(.DNSName | contains("MY-LB2"))'
Update:
This solution seems to work even better:
aws elbv2 describe-load-balancers --region=us-west-2 | jq -c '.LoadBalancers[] | select(.DNSName | match("my-lb2";"i"))'
But I have not had the chance to test it in detail yet.
You probably should be using test/2 rather than match/2, but in either case, since the problem description calls for case-insensitive equality, you would use an anchored regex (with the literal dots escaped):
.LoadBalancers[]
| select(.DNSName | test("^my-lb2-9876556789\\.us-west-2\\.elb\\.amazonaws\\.com$"; "i"))
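For example, plugged into the original pipeline:
aws elbv2 describe-load-balancers --region=us-west-2 | jq -c '.LoadBalancers[] | select(.DNSName | test("^my-lb2-9876556789\\.us-west-2\\.elb\\.amazonaws\\.com$"; "i"))'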
With the caveat that ascii_upcase only translates ASCII characters, it might be more efficient to use it:
.LoadBalancers[]
| select(.DNSName | ascii_upcase == "MY-LB2-9876556789.US-WEST-2.ELB.AMAZONAWS.COM")
