Getting the values of keys of Ansible JSON output - ansible

I have the following JSON data
{
"docker_compose_init_result": {
"changed": true,
"failed": false,
"services": {
"grafana": {
"docker-compose_grafana_1": {
"cmd": [],
"image": "grafana/grafana:8.5.14",
"labels": {
"com.docker.compose.config-hash": "4d0b5dd6e697a8fe5bf5074192770285e54da43ad32cc34ba9c56505cb709431",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "grafana",
"com.docker.compose.version": "1.29.2"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.2",
"IPPrefixLen": 16,
"aliases": [
"3d19f54271b2",
"grafana"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:02"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"node-red": {
"docker-compose_node-red_1": {
"cmd": [],
"image": "nodered/node-red:2.2.2",
"labels": {
"authors": "Dave Conway-Jones, Nick O'Leary, James Thomas, Raymond Mouthaan",
"com.docker.compose.config-hash": "5610863d4b28b11645acb5651e7bab174125743dc86a265969788cc8ac782efe",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "node-red",
"com.docker.compose.version": "1.29.2",
"org.label-schema.arch": "",
"org.label-schema.build-date": "2022-02-18T21:01:04Z",
"org.label-schema.description": "Low-code programming for event-driven applications.",
"org.label-schema.docker.dockerfile": ".docker/Dockerfile.alpine",
"org.label-schema.license": "Apache-2.0",
"org.label-schema.name": "Node-RED",
"org.label-schema.url": "https://nodered.org",
"org.label-schema.vcs-ref": "",
"org.label-schema.vcs-type": "Git",
"org.label-schema.vcs-url": "https://github.com/node-red/node-red-docker",
"org.label-schema.version": "2.2.2"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.4",
"IPPrefixLen": 16,
"aliases": [
"fc56e973c98d",
"node-red"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:04"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"organizr": {
"docker-compose_organizr_1": {
"cmd": [],
"image": "organizr/organizr:linux-amd64",
"labels": {
"base.maintainer": "christronyxyocum,Roxedus",
"base.s6.arch": "amd64",
"base.s6.rel": "2.2.0.3",
"com.docker.compose.config-hash": "430b338b0c0892a25522e1b641a9e3a08eedd255309b1cd275b22a3362dcac58",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "organizr",
"com.docker.compose.version": "1.29.2",
"maintainer": "christronyxyocum,Roxedus",
"org.label-schema.description": "Baseimage for Organizr",
"org.label-schema.name": "organizr/base",
"org.label-schema.schema-version": "1.0",
"org.label-schema.url": "https://organizr.app/",
"org.label-schema.vcs-url": "https://github.com/organizr/docker-base",
"org.opencontainers.image.created": "2022-05-08_15",
"org.opencontainers.image.source": "https://github.com/Organizr/docker-organizr/tree/master",
"org.opencontainers.image.title": "organizr/base",
"org.opencontainers.image.url": "https://github.com/Organizr/docker-organizr/blob/master/README.md"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.3",
"IPPrefixLen": 16,
"aliases": [
"organizr",
"f3f61d8938fe"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:03"
}
},
"state": {
"running": true,
"status": "running"
}
}
},
"prometheus": {
"docker-compose_prometheus_1": {
"cmd": [
"--config.file=/etc/prometheus/prometheus.yml",
"--storage.tsdb.path=/prometheus",
"--web.console.libraries=/etc/prometheus/console_libraries",
"--web.console.templates=/etc/prometheus/consoles",
"--web.enable-lifecycle"
],
"image": "prom/prometheus:v2.35.0",
"labels": {
"com.docker.compose.config-hash": "7d2ce7deba1a152ebcf4fe5494384018c514f6703b5e906aef6f2e8820733cb2",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "docker-compose",
"com.docker.compose.project.config_files": "/appl/docker-compose/docker-compose-init.yml",
"com.docker.compose.project.working_dir": "/appl/docker-compose",
"com.docker.compose.service": "prometheus",
"com.docker.compose.version": "1.29.2",
"maintainer": "The Prometheus Authors <prometheus-developers#googlegroups.com>"
},
"networks": {
"docker-compose_homeserver-net": {
"IPAddress": "172.20.0.5",
"IPPrefixLen": 16,
"aliases": [
"04f346e6694f",
"prometheus"
],
"globalIPv6": "",
"globalIPv6PrefixLen": 0,
"links": null,
"macAddress": "02:42:ac:14:00:05"
}
},
"state": {
"running": true,
"status": "running"
}
}
}
}
}
}
And I need an output similar to
- docker-compose_grafana_1
- docker-compose_node-red_1
- docker-compose_organizr_1
- docker-compose_prometheus_1
I can do that with jq easy-peasy:
jq --raw-output '.docker_compose_init_result.services[] | keys | .[]' jsondata.json
But I am not able to do it with Ansible and especially json_query (and thus JMESPath).
I was able to get one key with
jp -f jsondata.json "keys(docker_compose_init_result.services.grafana)"
[
"docker-compose_grafana_1"
]
But I have no idea how to get all four. Also, some expressions that worked with jp did not work in Ansible with json_query, which made things even more confusing.
If anyone can give me a solution for this (whether it's with json_query or not), and ideally explain how it works, I would be very glad.

Solution using only builtin filters:
docker_compose_list: "{{ docker_compose_init_result.services | dict2items
| map(attribute='value') | map('dict2items')
| flatten | map(attribute='key') }}"
which, once expanded, gives:
{
"docker_compose_list": [
"docker-compose_grafana_1",
"docker-compose_node-red_1",
"docker-compose_organizr_1",
"docker-compose_prometheus_1"
]
}
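The filter chain can be traced step by step with equivalent plain Python, where dict2items is modeled as the list of {'key': ..., 'value': ...} pairs Ansible produces (the services dict below is abbreviated from the question's data):

```python
# Emulate the Ansible chain: dict2items | map(attribute='value')
# | map('dict2items') | flatten | map(attribute='key')
services = {
    "grafana": {"docker-compose_grafana_1": {"state": "running"}},
    "node-red": {"docker-compose_node-red_1": {"state": "running"}},
    "organizr": {"docker-compose_organizr_1": {"state": "running"}},
    "prometheus": {"docker-compose_prometheus_1": {"state": "running"}},
}

def dict2items(d):
    # Ansible's dict2items: {'a': 1} -> [{'key': 'a', 'value': 1}]
    return [{"key": k, "value": v} for k, v in d.items()]

step1 = dict2items(services)                      # one item per service
step2 = [item["value"] for item in step1]         # the inner dicts
step3 = [dict2items(v) for v in step2]            # list of lists of pairs
step4 = [pair for sub in step3 for pair in sub]   # flatten
result = [pair["key"] for pair in step4]          # container names
print(result)
```

Each Python step maps one-to-one onto a filter in the Ansible expression, which makes the chain easy to debug piecewise.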

In a pure JMESPath way, your query should be:
docker_compose_init_result.services.*.keys(@)[]
Where:
.* is the notation for an object projection, which gives you the values under docker_compose_init_result.services, whatever the keys might be.
.keys(@) is the keys() function, applied to the current node @, which effectively means that keys() is invoked on every object that is a child of docker_compose_init_result.services.* (e.g. on docker_compose_init_result.services.grafana."docker-compose_grafana_1", docker_compose_init_result.services."node-red"."docker-compose_node-red_1", and so on).
[] is the flatten operator, which reduces your array of arrays to a single-level array.
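The same projection-then-flatten behavior can be mimicked in Python (a rough model of the semantics, not JMESPath itself, with trimmed sample data):

```python
services = {
    "grafana": {"docker-compose_grafana_1": {}},
    "node-red": {"docker-compose_node-red_1": {}},
}
# .*       -> iterate services.values()
# .keys(@) -> call keys() on each projected child
# []       -> flatten the per-child key lists
projected = [list(child.keys()) for child in services.values()]
flattened = [key for keys in projected for key in keys]
print(flattened)
```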

The query below
docker_compose_list: "{{ docker_compose_init_result|
json_query(_query) }}"
_query: 'services.*.keys(#)'
gives
docker_compose_list:
- - docker-compose_grafana_1
- - docker-compose_node-red_1
- - docker-compose_organizr_1
- - docker-compose_prometheus_1
Select first items
docker_compose_list: "{{ docker_compose_init_result|
json_query(_query)|
map('first')|list }}"
or flatten the list
docker_compose_list: "{{ docker_compose_init_result|
json_query(_query)|
flatten }}"
both give
docker_compose_list:
- docker-compose_grafana_1
- docker-compose_node-red_1
- docker-compose_organizr_1
- docker-compose_prometheus_1
Note: Be careful how you select or flatten the list. There might be a reason for the second-level keys. For example, if there are more keys in services.grafana the result might be
docker_compose_list:
- - docker-compose_grafana_1
- docker-compose_grafana_2
- docker-compose_grafana_3
- - docker-compose_node-red_1
- - docker-compose_organizr_1
- - docker-compose_prometheus_1
In this case, taking the first item or flattening the list doesn't necessarily give the result you want.

Related

Ansible combine two results based on the key

If someone can suggest a resource for self-education, it would be much appreciated. At the moment I am learning by example, based on what works. Sorry if the terminology is incorrect.
I have two results in JSON format (dictionaries?) from commands sent to the Netscaler API:
Result1 lists certificate information:
{
"errorcode": 0,
"message": "Done",
"severity": "NONE",
"sslcertkey": [
{
"certkey": "certkey1.pair",
"daystoexpiration": 0,
"status": "Expired",
"subject": "easdm.test.com"
},
{
"certkey": "certkey2.pair",
"daystoexpiration": 0,
"status": "Expired",
"subject": " CN=timer.test.com"
},
Result2 lists which certificate is bound to a virtual server:
{
"errorcode": 0,
"message": "Done",
"severity": "NONE",
"sslcertkey_sslvserver_binding": [
{
"certkey": "certkey1.pair",
"data": "1",
"servername": "easdm_gslb_btfin_pri_lb_vs",
"stateflag": "2",
"version": 1
},
{
"certkey": "certkey2.pair",
"data": "2",
"servername": "timer_gslb_btfin_pri_lb_vs",
"stateflag": "2",
"version": 1
},
I want to combine two results into Result3, so that it will combine info from Result1 with Result2 if "certkey" is matching between the results:
{
"certkey": "certkey1.pair",
"daystoexpiration": 0,
"status": "Expired",
"subject": "easdm.test.com",
"servername": "easdm_gslb_btfin_pri_lb_vs"
},
{
"certkey": "certkey2.pair",
"daystoexpiration": 0,
"status": "Expired",
"subject": " CN=timer.test.com",
"servername": "timer_gslb_btfin_pri_lb_vs"
},
There are hundreds of entries in each result, and some entries in Result1 will not have a match in Result2 because the cert is not used anywhere.
I tried using just a simple
- debug:
    msg: '{{ result.json.sslcertkey | combine(result2.json.sslcertkey_sslvserver_binding, recursive=True) }}'
but it seems to show only the last match and not all of the matches.
Use filter community.general.lists_mergeby
result3: "{{ result1.sslcertkey|
community.general.lists_mergeby(result2.sslcertkey_sslvserver_binding,
'certkey') }}"
gives
result3:
- certkey: certkey1.pair
data: '1'
daystoexpiration: 0
servername: easdm_gslb_btfin_pri_lb_vs
stateflag: '2'
status: Expired
subject: easdm.test.com
version: 1
- certkey: certkey2.pair
data: '2'
daystoexpiration: 0
servername: timer_gslb_btfin_pri_lb_vs
stateflag: '2'
status: Expired
subject: ' CN=timer.test.com'
version: 1
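The behavior of lists_mergeby can be approximated in Python: index the entries by the merge key and shallow-merge matching items (a sketch only; the real filter has more options, but the core idea is the same, with the second list winning on conflicting keys):

```python
def merge_by(list1, list2, key):
    # Approximation of community.general.lists_mergeby: entries sharing
    # the same value of `key` are combined into one dict.
    merged = {}
    for item in list1 + list2:
        merged.setdefault(item[key], {}).update(item)
    return sorted(merged.values(), key=lambda d: d[key])

certs = [{"certkey": "certkey1.pair", "status": "Expired"}]
bindings = [{"certkey": "certkey1.pair", "servername": "easdm_gslb_btfin_pri_lb_vs"}]
result = merge_by(certs, bindings, "certkey")
print(result)
```

Entries present in only one list pass through unmerged, which matches the question's requirement that unused certs still appear.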

what's the simplest way to calculate the sum of values at the end of this jq command?

I see that jq can calculate addition as simply as jq 'map(.duration) | add' but I've got a more complex command and I can't figure out how to perform this add at the end of it.
I'm starting with data like this:
{
"object": "list",
"data": [
{
"id": "in_1HW85aFGUwFHXzvl8wJbW7V7",
"object": "invoice",
"account_country": "US",
"customer_name": "clientOne",
"date": 1601244686,
"livemode": true,
"metadata": {},
"paid": true,
"status": "paid",
"total": 49500
},
{
"id": "in_1HJlIZFGUwFHXzvlWqhegRkf",
"object": "invoice",
"account_country": "US",
"customer_name": "clientTwo",
"date": 1598297143,
"livemode": true,
"metadata": {},
"paid": true,
"status": "paid",
"total": 51000
},
{
"id": "in_1HJkg5FGUwFHXzvlYp2uC63C",
"object": "invoice",
"account_country": "US",
"customer_name": "clientThree",
"date": 1598294757,
"livemode": true,
"metadata": {},
"paid": true,
"status": "paid",
"total": 57000
},
{
"id": "in_1H8B0pFGUwFHXzvlU6nrOm6I",
"object": "invoice",
"account_country": "US",
"customer_name": "clientThree",
"date": 1595536051,
"livemode": true,
"metadata": {},
"paid": true,
"status": "paid",
"total": 20000
}
],
"has_more": true,
"url": "/v1/invoices"
}
and my jq command looks like:
cat example-data.json |
jq -C '[.data[]
| {invoice_id: .id, client: .customer_name, date: .date | strftime("%Y-%m-%d"), amount: .total, status: .status}
| .amount = "$" + (.amount/100|tostring)]
| sort_by(.date)'
which nicely gives me output like:
[
{
"invoice_id": "in_1H8B0pFGUwFHXzvlU6nrOm6I",
"client": "clientThree",
"date": "2020-07-23",
"amount": "$200",
"status": "paid"
},
{
"invoice_id": "in_1HJlIZFGUwFHXzvlWqhegRkf",
"client": "clientTwo",
"date": "2020-08-24",
"amount": "$510",
"status": "paid"
},
{
"invoice_id": "in_1HJkg5FGUwFHXzvlYp2uC63C",
"client": "clientThree",
"date": "2020-08-24",
"amount": "$570",
"status": "paid"
},
{
"invoice_id": "in_1HW85aFGUwFHXzvl8wJbW7V7",
"client": "clientOne",
"date": "2020-09-27",
"amount": "$495",
"status": "paid"
}
]
and I want to add a sum/total at the end of that, something like Total: $1775, so that the entire output would look like this:
[
{
"invoice_id": "in_1H8B0pFGUwFHXzvlU6nrOm6I",
"client": "clientThree",
"date": "2020-07-23",
"amount": "$200",
"status": "paid"
},
{
"invoice_id": "in_1HJlIZFGUwFHXzvlWqhegRkf",
"client": "clientTwo",
"date": "2020-08-24",
"amount": "$510",
"status": "paid"
},
{
"invoice_id": "in_1HJkg5FGUwFHXzvlYp2uC63C",
"client": "clientThree",
"date": "2020-08-24",
"amount": "$570",
"status": "paid"
},
{
"invoice_id": "in_1HW85aFGUwFHXzvl8wJbW7V7",
"client": "clientOne",
"date": "2020-09-27",
"amount": "$495",
"status": "paid"
}
]
Total: $1775
Is there a neat/tidy way to enhance this jq command to achieve this?
Or even, since I'm invoking this in a shell script, a dirty/ugly way with bash?
If any of your output is going to be raw, you need to pass -r; it'll just be ignored for data items that aren't strings.
Anyhow -- if you write (expr1, expr2), then your input will be passed through both expressions. Thus:
jq -Cr '
([.data[]
| {invoice_id: .id,
client: .customer_name,
date: .date | strftime("%Y-%m-%d"),
amount: .total,
status: .status}
| .amount = "$" + (.amount/100|tostring)
] | sort_by(.date)),
"Total: $\([.data[] | .total] | add | . / 100)"
'
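The two-part output can be sanity-checked in Python: build the formatted rows, then compute the grand total separately from the raw totals, exactly as the jq (expr1, expr2) form emits two results from one input (data abbreviated to the four invoices in the question):

```python
data = [
    {"id": "in_1HW85aFGUwFHXzvl8wJbW7V7", "total": 49500},
    {"id": "in_1HJlIZFGUwFHXzvlWqhegRkf", "total": 51000},
    {"id": "in_1HJkg5FGUwFHXzvlYp2uC63C", "total": 57000},
    {"id": "in_1H8B0pFGUwFHXzvlU6nrOm6I", "total": 20000},
]
# First expression: the formatted rows (amount converted to dollars)
rows = [{"invoice_id": d["id"], "amount": "$" + str(d["total"] // 100)} for d in data]
# Second expression: the total, computed from the raw values
total_line = "Total: $" + str(sum(d["total"] for d in data) // 100)
print(total_line)  # Total: $1775
```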
In case you decide after all to emit valid JSON, here is a modular answer to the question that makes it easy to formulate alternative approaches, and which postpones the conversion of .amount to dollars for efficiency:
def todollar:
"$" + tostring;
def json:
[.data[]
| {invoice_id: .id,
client: .customer_name,
date: .date | strftime("%Y-%m-%d"),
amount: (.total/100),
status: .status} ]
| sort_by(.date) ;
json
| map_values(.amount |= todollar),
"Total: " + (map(.amount) | add | todollar)
As noted elsewhere, you will probably want to use the -r command-line option.

How to search key by passing value in json_query Ansible

I am calling an API and getting the output below. From that output I want to find the key matching a given input value; my input value is "vpc-tz". How can I achieve this in Ansible using json_query?
{
"json": {
"allScopes": [
{
"clusters": {
"clusters": [
{
"cluster": {
"clientHandle": "",
"type": {
"name": "ClusterComputeResource"
},
"universalRevision": 0,
"vsmUuid": "423B1819-9495-4F10-A96A-6D8284E51B29"
}
}
]
},
"controlPlaneMode": "UNICAST_MODE",
"description": "",
"extendedAttributes": [
],
"id": "vdnscope-6",
"isTemporal": false,
"isUniversal": false,
"name": "vpc-tz",
"nodeId": "85e0073d-0e5a-4f04-889b-42df771aebf8",
"objectId": "vdnscope-6",
"objectTypeName": "VdnScope",
"revision": 0,
"type": {
"name": "VdnScope"
},
"universalRevision": 0,
"virtualWireCount": 0,
"vsmUuid": "423B1819-9495-4F10-A96A-6D8284E51B29"
},
]
}
}
Here is a query which works:
json.allScopes[?name=='vpc-tz'].name
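The [?name=='vpc-tz'] filter expression is equivalent to a Python comprehension over allScopes (a model of the filter semantics with trimmed sample data; projecting id instead of name, as the final .id below does, is how you would pull the scope's identifier out of matching elements):

```python
all_scopes = [
    {"id": "vdnscope-6", "name": "vpc-tz"},
    {"id": "vdnscope-7", "name": "other-tz"},
]
# [?name=='vpc-tz'] keeps only matching elements; the trailing
# projection then selects one field from each survivor.
matching_ids = [s["id"] for s in all_scopes if s["name"] == "vpc-tz"]
print(matching_ids)
```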

how to groupBy and map at same transform on dataweave 2?

I have this dataweave 1.0 script that works well:
%dw 1.0
%output application/java
---
flowVars.worklogs groupBy $.author.accountId map {
accountId: $.author.accountId[0],
displayName: $.author.displayName[0],
timeSpentMinutesMonth: (sum $.timeSpentSeconds) / 3600,
billableMinutesMonth: (sum $.billableSeconds) / 3600,
emailAddress: ''
}
However, now I am updating the code for Mule 4, and I couldn't get this transformation to work.
I already tried updating it like this:
%dw 2.0
output application/java
---
vars.worklogs groupBy $.author.accountId map {
accountId: $.author.accountId[0],
displayName: $.author.displayName[0],
timeSpentMinutesMonth: (sum($.timeSpentSeconds)) / 3600,
billableMinutesMonth: (sum($.billableSeconds)) / 3600,
emailAddress: ''
}
But I got this error:
org.mule.runtime.core.internal.message.ErrorBuilder$ErrorImplementation
{
description="You called the function 'map' with these arguments:
1: Object ({"5d8b681427fe990dc2d3404a": [{self: "https://api.tempo.io/core/3/worklogs/54...)
2: Function ((v:Any, i:Any) -> ???)
But it expects arguments of these types:
1: Array
2: Function
4| vars.worklogs groupBy $.author.accountId map (v, i) -> {
| ...
10| }
Trace:
at map (line: 4, column: 1)
at main (line: 4, column: 42)" evaluating expression: "%dw 2.0
output application/java
---
vars.worklogs groupBy $.author.accountId map (v, i) -> {
accountId: v.author.accountId[0],
displayName: v.author.displayName[0],
timeSpentMinutesMonth: (sum(v.timeSpentSeconds)) / 3600,
billableMinutesMonth: (sum(v.billableSeconds)) / 3600,
emailAddress: ''
}".
The variable worklogs contains a json:
[
{
"self": "https://api.tempo.io/core/3/worklogs/5408",
"tempoWorklogId": 5408,
"jiraWorklogId": 15408,
"issue": {
"self": "https://xpto.atlassian.net/rest/api/2/issue/ABC-123",
"key": "ABC-123",
"id": 11005
},
"timeSpentSeconds": 28800,
"billableSeconds": 28800,
"startDate": "2020-01-31",
"startTime": "00:00:00",
"description": "creating new song",
"createdAt": "2020-02-28T13:30:58Z",
"updatedAt": "2020-02-28T13:30:58Z",
"author": {
"self": "https://xpto.atlassian.net/rest/api/2/user?accountId=5d8b681427fe990dc2d3404a",
"accountId": "5d8b681427fe990dc2d3404a",
"displayName": "john lennon"
},
"attributes": {
"self": "https://api.tempo.io/core/3/worklogs/5408/work-attribute-values",
"values": [
]
}
},
{
"self": "https://api.tempo.io/core/3/worklogs/5166",
"tempoWorklogId": 5166,
"jiraWorklogId": 15166,
"issue": {
"self": "https://xpto.atlassian.net/rest/api/2/issue/CDE-99",
"key": "CDE-99",
"id": 10106
},
"timeSpentSeconds": 3600,
"billableSeconds": 3600,
"startDate": "2020-01-31",
"startTime": "00:00:00",
"description": "call with stakeholders",
"createdAt": "2020-02-10T18:30:03Z",
"updatedAt": "2020-02-10T18:30:03Z",
"author": {
"self": "https://xpto.atlassian.net/rest/api/2/user?accountId=5b27ad3902cfea1ba6411c3f",
"accountId": "5b27ad3902cfea1ba6411c3f",
"displayName": "chandler bing"
},
"attributes": {
"self": "https://api.tempo.io/core/3/worklogs/5166/work-attribute-values",
"values": [
]
}
},
{
"self": "https://api.tempo.io/core/3/worklogs/5165",
"tempoWorklogId": 5165,
"jiraWorklogId": 15165,
"issue": {
"self": "https://xpto.atlassian.net/rest/api/2/issue/CDE-99",
"key": "CDE-99",
"id": 10081
},
"timeSpentSeconds": 3600,
"billableSeconds": 3600,
"startDate": "2020-01-31",
"startTime": "00:00:00",
"description": "planning tulsa work trip",
"createdAt": "2020-02-10T18:29:30Z",
"updatedAt": "2020-02-10T18:29:30Z",
"author": {
"self": "https://xpto.atlassian.net/rest/api/2/user?accountId=5b27ad3902cfea1ba6411c3f",
"accountId": "5b27ad3902cfea1ba6411c3f",
"displayName": "chandler bing"
},
"attributes": {
"self": "https://api.tempo.io/core/3/worklogs/5165/work-attribute-values",
"values": [
]
}
},
{
"self": "https://api.tempo.io/core/3/worklogs/5164",
"tempoWorklogId": 5164,
"jiraWorklogId": 15164,
"issue": {
"self": "https://xpto.atlassian.net/rest/api/2/issue/CDE-99",
"key": "CDE-99",
"id": 10108
},
"timeSpentSeconds": 7200,
"billableSeconds": 7200,
"startDate": "2020-01-31",
"startTime": "00:00:00",
"description": "exporting data to cd-rom",
"createdAt": "2020-02-10T18:29:08Z",
"updatedAt": "2020-02-10T18:29:47Z",
"author": {
"self": "https://xpto.atlassian.net/rest/api/2/user?accountId=5b27ad3902cfea1ba6411c3f",
"accountId": "5b27ad3902cfea1ba6411c3f",
"displayName": "chandler-bing"
},
"attributes": {
"self": "https://api.tempo.io/core/3/worklogs/5164/work-attribute-values",
"values": [
]
}
}
]
I don't understand why this isn't working. I read the docs and found that groupBy and map in DW 2.0 work pretty much the same as in DW 1.0.
According to this question, it is necessary to add pluck after groupBy instead of map:
%dw 2.0
output application/json
---
vars.worklogs groupBy $.author.accountId pluck {
accountId: $.author.accountId[0],
displayName: $.author.displayName[0],
timeSpentMinutesMonth: (sum($.timeSpentSeconds)) / 3600,
billableMinutesMonth: (sum($.billableSeconds)) / 3600,
emailAddress: ''
}
The problem is that in DataWeave 1.0 map() accepted an object as an argument, in addition to arrays. In DataWeave 2.0 it is defined only for arrays and null, so you need to iterate over the keys of the result object of groupBy(), which is what pluck does.
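The groupBy-then-pluck shape can be modeled in Python: groupBy yields an object keyed by accountId, and pluck iterates that object's values (the part a plain map can no longer do on an object in DW 2.0). A sketch with the question's numbers; the "hours" field name is mine, since the script divides seconds by 3600:

```python
from collections import defaultdict

worklogs = [
    {"author": {"accountId": "5d8b681427fe990dc2d3404a"}, "timeSpentSeconds": 28800},
    {"author": {"accountId": "5b27ad3902cfea1ba6411c3f"}, "timeSpentSeconds": 3600},
    {"author": {"accountId": "5b27ad3902cfea1ba6411c3f"}, "timeSpentSeconds": 3600},
    {"author": {"accountId": "5b27ad3902cfea1ba6411c3f"}, "timeSpentSeconds": 7200},
]

# groupBy: object keyed by accountId -> list of that author's worklogs
groups = defaultdict(list)
for w in worklogs:
    groups[w["author"]["accountId"]].append(w)

# pluck: iterate the grouped object's values to build one summary each
summaries = [
    {
        "accountId": logs[0]["author"]["accountId"],
        "hours": sum(l["timeSpentSeconds"] for l in logs) / 3600,
    }
    for logs in groups.values()
]
print(summaries)
```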

Ansible - How to combine list attributes?

I have two separate lists. The first is a list (base_list) with basic parameters, and the second is a list (dev_list) with parameters for a specific stand.
"base_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/logstash.yml",
"src": "/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml"
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json"
},
{
"name": "grafana",
"path": "grafana/conf/nginx.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json"
},
{
"name": "grafana",
"path": "grafana/conf/config.json",
"src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
},
]
"dev_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": "logstash/conf/jvm.options",
"src": "/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
}
]
My goal is to combine these two lists so that each item.name appears once, with its item.path and item.src values collected into lists, like this:
"end_list": [
{
"name": "kibana",
"path": "kibana/conf/kibana.xml",
"src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
},
{
"name": "grafana",
"path": [
"grafana/conf/grafana.json",
"grafana/conf/nginx.json",
"grafana/conf/config.json"
]
"src": [
"/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
]
},
]
What would be the best way to do this?
This would probably be easier with a custom Python filter, but here's a solution using Ansible's built-in filters:
---
- hosts: localhost
  gather_facts: false
  vars:
    "base_list": [
      {
        "name": "kibana",
        "path": "kibana/conf/kibana.xml",
        "src": "/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml"
      },
      {
        "name": "logstash",
        "path": "logstash/conf/logstash.yml",
        "src": "/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml"
      },
      {
        "name": "grafana",
        "path": "grafana/conf/grafana.json",
        "src": "/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json"
      },
    ]
    "dev_list": [
      {
        "name": "kibana",
        "path": "kibana/conf/kibana.xml",
        "src": "/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
      },
      {
        "name": "logstash",
        "path": "logstash/conf/jvm.options",
        "src": "/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
      }
    ]
  tasks:
    - set_fact:
        end_list: >-
          {{ end_list|default([]) + [
            {
              'name': item.0.name,
              'path': item.1.path|ternary([item.0.path, item.1.path], item.0.path),
              'src': item.1.src|ternary([item.0.src, item.1.src], item.1.src)
            }
          ]}}
      loop: >-
        {{ base_list|zip_longest(dev_list,
           fillvalue={'path': false, 'src': false})|list }}
    - debug:
        var: end_list
This was a little tricky to put together, so I'll try to describe the various parts:
The loop uses the zip_longest filter. Given the lists list1=[1, 2, 3] and list2=[11, 12], list1|zip_longest(list2) would produce [[1,11], [2,12], [3,None]] (that is, by default, zip_longest will use None as a fill value if one list is shorter than the other). By setting the fillvalue parameter, we can use a value other than None. In this case...
loop: >-
  {{ base_list|zip_longest(dev_list,
     fillvalue={'path': false, 'src': false})|list }}
...We're setting the fill value to a dictionary with stub values for path and src, since this makes the rest of the expression easier.
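zip_longest itself comes from Python's itertools and behaves the same way, which makes the fill-value mechanics easy to see in isolation:

```python
from itertools import zip_longest

base_list = [{"name": "kibana"}, {"name": "logstash"}, {"name": "grafana"}]
dev_list = [{"name": "kibana"}, {"name": "logstash"}]

# With a fillvalue, the shorter list is padded with the stub dict
# instead of None, so later attribute lookups do not blow up.
fill = {"path": False, "src": False}
pairs = list(zip_longest(base_list, dev_list, fillvalue=fill))
print(pairs[2])  # ({'name': 'grafana'}, {'path': False, 'src': False})
```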
The meat of the solution is of course the set_fact action, which in simplified form looks like:
end_list: "{{ end_list|default([]) + [{...a dictionary...}] }}"
In other words, for each iteration of the loop, this will append a new dictionary to end_list.
We create the dictionary like this:
{
'name': item.0.name,
'path': item.1.path|ternary([item.0.path, item.1.path], item.0.path),
'src': item.1.src|ternary([item.0.src, item.1.src], item.1.src)
}
We're using the ternary filter here, which evaluates its input as a boolean; if it's true, it selects the first argument, otherwise the second. Here we're taking advantage of the fillvalue we passed to the zip_longest filter: if dev_list is shorter than base_list, we'll have some items for which item.1.path and item.1.src are false, causing the ternary filter to select the second value (either item.0.path or item.1.src). In other cases, we build a list by combining the values from each of base_list and dev_list.
The result of running this playbook looks like:
ok: [localhost] => {
"end_list": [
{
"name": "kibana",
"path": [
"kibana/conf/kibana.xml",
"kibana/conf/kibana.xml"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml",
"/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
]
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
},
{
"name": "grafana",
"path": "grafana/conf/grafana.json",
"src": false
}
]
}
Let me know if that helps, and whether or not the resulting data structure is what you were looking for. I had to make a few assumptions since your example end_list contained invalid syntax, so I took a guess at what you wanted.
Assuming you had well-formed JSON and those are properties on the root object, jq is perfectly suited for this. Group the contents of the arrays by name, then generate the appropriate result objects.
$ jq '{
end_combine: (
.base_list + .dev_list
| group_by(.name)
| map({ name: .[0].name, path: map(.path), src: map(.src) })
)
}' input.json
{
"end_combine": [
{
"name": "grafana",
"path": [
"grafana/conf/grafana.json",
"grafana/conf/nginx.json",
"grafana/conf/config.json"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/grafana/conf/grafana.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/nginx.json",
"/Users/ansible/inventories/_base/group_vars/grafana/conf/config.json"
]
},
{
"name": "kibana",
"path": [
"kibana/conf/kibana.xml",
"kibana/conf/kibana.xml"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/kibana/conf/kibana.xml",
"/Users/ansible/inventories/dev-st/group_vars/kibana/conf/kibana.xml"
]
},
{
"name": "logstash",
"path": [
"logstash/conf/logstash.yml",
"logstash/conf/jvm.options"
],
"src": [
"/Users/ansible/inventories/_base/group_vars/logstash/conf/logstash.yml",
"/Users/ansible/inventories/dev-st/group_vars/logstash/conf/jvm.options"
]
}
]
}
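The jq pipeline translates almost line for line into Python: concatenate, group by name, then map each group to one merged object (a sketch with shortened paths standing in for the real ones):

```python
from itertools import groupby

base_list = [
    {"name": "kibana", "path": "kibana/conf/kibana.xml", "src": "/base/kibana.xml"},
    {"name": "logstash", "path": "logstash/conf/logstash.yml", "src": "/base/logstash.yml"},
]
dev_list = [
    {"name": "logstash", "path": "logstash/conf/jvm.options", "src": "/dev/jvm.options"},
]

# .base_list + .dev_list | group_by(.name) | map({...})
# jq's group_by sorts by the grouping key; itertools.groupby needs
# the same pre-sort to collect non-adjacent entries together.
combined = sorted(base_list + dev_list, key=lambda d: d["name"])
end_combine = []
for name, group in groupby(combined, key=lambda d: d["name"]):
    group = list(group)
    end_combine.append({
        "name": name,
        "path": [d["path"] for d in group],
        "src": [d["src"] for d in group],
    })
print(end_combine)
```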
