I was wondering if it is possible to reduce the return payload of this query. Result:
{
"nodes": [
{
"topic": {
"name": "typescript"
}
},
{
"topic": {
"name": "discord"
}
},
{
"topic": {
"name": "discord-bot"
}
},
{
"topic": {
"name": "discordjs"
}
},
{
"topic": {
"name": "discordjs-commando"
}
},
{
"topic": {
"name": "mbti-personality"
}
},
{
"topic": {
"name": "mbti"
}
},
{
"topic": {
"name": "typeorm"
}
}
]
}
Into something like this:
{
"nodes": ["typescript", "discord", "discord-bot", "discordjs", "discordjs-commando", "mbti-personality", "mbti", "typeorm"]
}
I find it very verbose and unnecessary.
I am not the owner of the API, so this concerns only the query (it's GitHub's GraphQL API).
I'm new to GraphQL and don't understand the principles yet, so I don't know the terms to search for.
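Since you don't control the schema, GraphQL can trim *which* fields come back, but the nesting of the response always mirrors the nesting of the query, so the API's shape is fixed. If the goal is just a flat list of names, a one-line client-side mapping does it — sketched here in Python (the variable names are assumptions):

```python
# Hypothetical GraphQL response, shaped like the GitHub API result above.
result = {
    "nodes": [
        {"topic": {"name": "typescript"}},
        {"topic": {"name": "discord"}},
        {"topic": {"name": "typeorm"}},
    ]
}

# Flatten client-side: extract each nested topic name into a plain list.
names = [node["topic"]["name"] for node in result["nodes"]]
```

The same one-liner works in any client language; the reshaping just has to happen after the response arrives.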
I am new to MongoDB aggregation and I have this situation: I have this JSON and I need to convert this object via a "select":
{
"type": "PF",
"code": 12345
"Name": Darth Vader,
"currency": "BRL",
"status": "SINGLE",
"adress": [
{
"localization": "DEATH STAR",
"createDate": 1627990848665
},
{
"localization": "TATOOINE",
"createDate": 1627990555665
}
]
}
this way:
{
"type": "PF",
"code": 12345
"Name": Darth Vader,
"currency": "BRL",
"status": "SINGLE",
"localization": "DEATH STAR",
"createDate": 1627990848665
},
{
"type": "PF",
"code": 12345
"Name": Darth Vader,
"currency": "BRL",
"status": "SINGLE",
"localization": "TATOOINE",
"createDate": 1627990555665
}
So, after my query is complete, I will have 2 objects instead of 1. How can I do this?
I would like to do this via select because, after converting, I will sort by createDate and limit the number of records returned to the API. I'm using Criteria in my project.
The way to do this is $unwind, which makes one copy of the document for each member of the array.
db.collection.aggregate([
{
"$unwind": {
"path": "$user.adress"
}
},
{
"$set": {
"user": {
"$mergeObjects": [
"$user",
"$user.adress"
]
}
}
},
{
"$unset": [
"user.adress"
]
},
{
"$sort": {
"createDate": 1
}
},
{
"$limit": 10
}
])
Edit 1: the above applies if user is a field. If user was instead the name of the collection, use $$ROOT, a system variable whose value is the whole document:
Query
db.collection.aggregate([
{
"$unwind": {
"path": "$adress"
}
},
{
"$replaceRoot": {
"newRoot": {
"$mergeObjects": [
"$$ROOT",
"$adress"
]
}
}
},
{
"$unset": [
"adress"
]
},
{
"$sort": {
"createDate": 1
}
},
{
"$limit": 10
}
])
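To make the pipeline's effect concrete, here is the same unwind / merge / unset / sort logic replayed in plain Python on the sample document (an illustration of what the stages do, not how Criteria would execute them):

```python
doc = {
    "type": "PF", "code": 12345, "Name": "Darth Vader",
    "currency": "BRL", "status": "SINGLE",
    "adress": [
        {"localization": "DEATH STAR", "createDate": 1627990848665},
        {"localization": "TATOOINE", "createDate": 1627990555665},
    ],
}

# $unwind: one output document per element of the adress array,
# $mergeObjects: lift the address fields to the top level,
# $unset: drop the original adress array.
flat = []
for addr in doc["adress"]:
    merged = {k: v for k, v in doc.items() if k != "adress"}
    merged.update(addr)
    flat.append(merged)

# $sort: ascending by createDate
flat.sort(key=lambda d: d["createDate"])
```

On the sample data this yields two flat documents, TATOOINE first (its createDate is smaller), matching the "2 objects instead of 1" goal.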
I am trying to convert a JSON File into CSV but I don't seem to have any luck in doing so. My JSON looks something like that:
...
{
{"meta": {
"contentType": "Response"
},
"content": {
"data": {
"_type": "ObjectList",
"erpDataObjects": [
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
"fields": {
"number": {
"value": "1",
},
"id": {
"value": "10000"
},
}
}
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
"fields": {
"number": {
"value": "2",
},
"id": {
"value": "10001"
},
}
}
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
.. much more data
I basically want my csv to look like this:
number,id
1,10000
2,10001
My flow looks like this:
GetFile -> Set the output-file name -> ConvertRecord -> UpdateAttribute -> PutFile
ConvertRecord uses a JsonTreeReader and a CSVRecordSetWriter.
They both call on an AvroSchemaRegistry.
The Avro schema itself looks like this:
{
"type": "record",
"name": "head",
"fields":
[
{"name": "number", "type": ["string"]},
{"name": "id", "type": ["string"]},
]
}
But I only get this output:
number,id
,
Which makes sense, because I'm not specifically indicating where those values are located. I used JsonPathReader before instead, but it only gave me one record. I'm not really sure how I can configure either of the two to output exactly what I want. Help would be much appreciated!
Using ConvertRecord for JSON -> CSV is mostly intended for "flat" JSON files, where each field in the object becomes a column in the outgoing CSV file. For nested/complex structures, consider JoltTransformJSON; it allows you to do more complex transformations. Your example doesn't appear to be valid JSON as-is, but assuming you have something like this as input:
{
"meta": {
"contentType": "Response"
},
"content": {
"data": {
"_type": "ObjectList",
"erpDataObjects": [
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000"
},
"head": {
"fields": {
"number": {
"value": "1"
},
"id": {
"value": "10000"
}
}
}
},
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000"
},
"head": {
"fields": {
"number": {
"value": "2"
},
"id": {
"value": "10001"
}
}
}
}
]
}
}
}
The following JOLT spec should give you what you want for output:
[
{
"operation": "shift",
"spec": {
"content": {
"data": {
"erpDataObjects": {
"*": {
"head": {
"fields": {
"number": {
"value": "[&4].number"
},
"id": {
"value": "[&4].id"
}
}
}
}
}
}
}
}
}
]
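Applied to that input, the shift spec produces a top-level array of flat {number, id} records, which CSVRecordSetWriter can then serialize directly. The equivalent extraction, sketched in Python for clarity (not a JOLT implementation, just the same walk):

```python
data = {
    "content": {
        "data": {
            "erpDataObjects": [
                {"head": {"fields": {"number": {"value": "1"}, "id": {"value": "10000"}}}},
                {"head": {"fields": {"number": {"value": "2"}, "id": {"value": "10001"}}}},
            ]
        }
    }
}

# One flat record per erpDataObject, pulling number and id out of head.fields
rows = [
    {"number": obj["head"]["fields"]["number"]["value"],
     "id": obj["head"]["fields"]["id"]["value"]}
    for obj in data["content"]["data"]["erpDataObjects"]
]

# The CSV the record writer would emit from those rows
csv_text = "number,id\n" + "\n".join(f'{r["number"]},{r["id"]}' for r in rows)
```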
Good day.
I am creating a custom visualization with d3.js and pbiviz for Power BI.
Here is the code in capabilities.json:
{
"dataRoles":[
{
"displayName": "HoleDepth",
"name": "depth",
"kind": "Grouping"
},
{
"displayName": "Days",
"name": "days",
"kind": "Measure"
},
{
"displayName": "Diametrs",
"name": "diametrs",
"kind": "Measure"
},
{
"displayName": "Sensor1",
"name": "sensor_1",
"kind": "Measure"
},
{
"displayName": "Sensor2",
"name": "sensor_2",
"kind": "Measure"
},
{
"displayName": "Sensor3",
"name": "sensor_3",
"kind": "Measure"
},
{
"displayName": "Sensor4",
"name": "sensor_4",
"kind": "Measure"
}
],
"dataViewMappings": [
{
"categorical": {
"categories": {
"for": { "in": "depth" }
},
"values": {
"select":[
{ "bind": { "to": "days" } },
{ "bind": { "to": "diametrs" } },
{ "bind": { "to": "sensor_1" } },
{ "bind": { "to": "sensor_2" } },
{ "bind": { "to": "sensor_3" } },
{ "bind": { "to": "sensor_4" } }
]
}
}
}
]
}
But in the visualization it is inconvenient to use the categorical -> values array.
Is it possible for categorical -> values to be an object with keys instead?
I do not think this is possible directly through data mapping. What I usually do when I want the data prepared in a specific format, convenient for visualization with d3.js, is write a custom function that transforms the data from VisualUpdateOptions.
Then I call this function inside public update(options: VisualUpdateOptions)
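The transform itself is just pivoting the values array into an object keyed by each column's role. Sketched here in Python for clarity (the payload shape is a simplified stand-in for dataView.categorical.values, where each column carries its role name):

```python
# Hypothetical categorical payload: one entry per measure, tagged with its role,
# mirroring the roles declared in capabilities (days, diametrs, sensor_1, ...).
values = [
    {"role": "days",     "values": [1, 2, 3]},
    {"role": "diametrs", "values": [10, 20, 30]},
    {"role": "sensor_1", "values": [0.1, 0.2, 0.3]},
]

# Pivot the array into an object keyed by role, which is easier to consume in d3
by_role = {col["role"]: col["values"] for col in values}
```

In the real visual the same dict-building loop would live in the transform function and read each column's role from the data view metadata.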
I use gatsby-source-strapi to gather some data from Strapi in Gatsby.
I need to change the data structure and generate a new set of source nodes for my static query.
How do I achieve the expected result below with Gatsby's GraphQL APIs (createResolvers, sourceNodes, createSchemaCustomization)?
GIVEN that I don't want to change the database structure.
e.g. original query
query MyQuery {
allStrapiAppPermissions {
edges {
node {
name
markets {
code
}
}
}
}
}
result generated by the above query
{
"data": {
"allStrapiAppPermissions": {
"edges": [
{
"node": {
"name": "PERMISSION_MYACC_OVERVIEW",
"markets": [
{
"code": "hk"
},
{
"code": "th"
}
]
}
},
{
"node": {
"name": "PERMISSION_MYACC_UPDATE",
"markets": [
{
"code": "hk"
}
]
}
}
]
}
}
}
e.g. expected result that I want to achieve
{
"data": {
"all_Concated_StrapiAppPermissions": {
"edges": [
{
"node": {
"markets": {"code": "hk"},
"permissions": [
{
"name": "PERMISSION_MYACC_UPDATE"
},
{
"name": "PERMISSION_MYACC_OVERVIEW"
}
]
}
},
{
"node": {
"markets": {"code": "th"},,
"permissions": [
{
"name": "PERMISSION_MYACC_OVERVIEW"
}
]
}
}
]
}
}
}
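Whichever Gatsby API ends up exposing the new type, the reshaping itself is a group-by on market code: invert the permission -> markets nesting into market -> permissions. In plain Python terms (names taken from the sample data):

```python
permissions = [
    {"name": "PERMISSION_MYACC_OVERVIEW", "markets": [{"code": "hk"}, {"code": "th"}]},
    {"name": "PERMISSION_MYACC_UPDATE",  "markets": [{"code": "hk"}]},
]

# Invert the nesting: market code -> list of permission names
by_market = {}
for perm in permissions:
    for market in perm["markets"]:
        by_market.setdefault(market["code"], []).append({"name": perm["name"]})

# One node per market, shaped like the expected result above
nodes = [{"markets": {"code": code}, "permissions": perms}
         for code, perms in sorted(by_market.items())]
```

A custom resolver (e.g. via createResolvers) would run this same grouping over the Strapi nodes and return `nodes` as the new field's value.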
I'm trying to parse out specific subnet names in the following piece of JSON, using contains or starts_with filters in json_query.
It contains two vnets each of which has multiple subnets:
{
"azure_virtualnetworks": [
{
"name": "test-vnet-172-17-0-0-19",
"properties": {
"subnets": [
{
"name": "test-confluent-subnet-172-17-0-0-28",
"properties": {
"addressPrefix": "172.20.88.0/28",
"networkSecurityGroup": {
"id": "/subscriptions/********/resourceGroups/test-confluent-rg/providers/Microsoft.Network/networkSecurityGroups/test-confluent-nsg"
},
"provisioningState": "Succeeded"
}
},
{
"name": "test-test-subnet-172-17-0-32-28",
"properties": {
"addressPrefix": "172.20.88.32/28",
"networkSecurityGroup": {
"id": "/subscriptions/********/resourceGroups/test-test-rg/providers/Microsoft.Network/networkSecurityGroups/test-test-nsg"
},
"provisioningState": "Succeeded"
}
}
]
}
},
{
"name": "test2-vnet-172-17-1-0-19",
"properties": {
"subnets": [
{
"name": "test-confluent-subnet-172-17-1-0-28",
"properties": {
"addressPrefix": "172.20.88.0/28",
"networkSecurityGroup": {
"id": "/subscriptions/********/resourceGroups/test-confluent-rg/providers/Microsoft.Network/networkSecurityGroups/test-confluent-nsg"
},
"provisioningState": "Succeeded"
}
},
{
"name": "test-qatesting-subnet-172-17-1-16-28",
"properties": {
"addressPrefix": "172.20.88.16/28",
"networkSecurityGroup": {
"id": "/subscriptions/********/resourceGroups/test-qatesting-rg/providers/Microsoft.Network/networkSecurityGroups/test-qatesting-nsg"
},
"provisioningState": "Succeeded"
}
}
]
}
}
]
}
I need to search for a subnet name after searching by virtual network name.
I can filter as far down as the list of subnets without problems, e.g.
azure_virtualnetworks[?contains(name,`test2-vnet`)].properties.subnets[]
returns:
[
{
"name": "test-confluent-subnet-172-17-1-0-28",
"properties": {
"addressPrefix": "172.20.88.0/28",
"networkSecurityGroup": {
"id": "/subscriptions/********/resourceGroups/test-confluent-rg/providers/Microsoft.Network/networkSecurityGroups/test-confluent-nsg"
},
"provisioningState": "Succeeded"
}
},
{
"name": "test-qatesting-subnet-172-17-1-16-28",
"properties": {
"addressPrefix": "172.20.88.16/28",
"networkSecurityGroup": {
"id": "/subscriptions/********/resourceGroups/test-qatesting-rg/providers/Microsoft.Network/networkSecurityGroups/test-qatesting-nsg"
},
"provisioningState": "Succeeded"
}
}
]
However, I'm having problems then searching the subnets. I had thought that some variation on the following might work, but haven't had any success:
azure_virtualnetworks[?contains(name,`test2-vnet`)].properties.subnets[?contains(name,`test-confluent`) ]
I'm struggling to figure out what the correct syntax is here.
Select the required subnets, stop the projection with a pipe expression, then filter the required items from the subnets list:
azure_virtualnetworks[?contains(name,`test2-vnet`)].properties.subnets[] | [?contains(name,`test-confluent`)]
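The pipe is the key: it stops the projection, so the second filter runs against the flattened subnet list instead of being applied inside each projected sub-result. For comparison, the same selection written as a plain Python comprehension (abbreviated data):

```python
data = {
    "azure_virtualnetworks": [
        {"name": "test-vnet-172-17-0-0-19",
         "properties": {"subnets": [
             {"name": "test-confluent-subnet-172-17-0-0-28"},
             {"name": "test-test-subnet-172-17-0-32-28"}]}},
        {"name": "test2-vnet-172-17-1-0-19",
         "properties": {"subnets": [
             {"name": "test-confluent-subnet-172-17-1-0-28"},
             {"name": "test-qatesting-subnet-172-17-1-16-28"}]}},
    ]
}

# vnets matching "test2-vnet" -> their subnets flattened
#   -> keep subnets matching "test-confluent"
matches = [
    subnet
    for vnet in data["azure_virtualnetworks"] if "test2-vnet" in vnet["name"]
    for subnet in vnet["properties"]["subnets"] if "test-confluent" in subnet["name"]
]
```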