Data view mapping in Power BI visuals - d3.js

Good day.
I am creating a custom visualization with d3.js and pbiviz for Power BI.
Here is the code in capabilities.json:
{
  "dataRoles": [
    {
      "displayName": "HoleDepth",
      "name": "depth",
      "kind": "Grouping"
    },
    {
      "displayName": "Days",
      "name": "days",
      "kind": "Measure"
    },
    {
      "displayName": "Diametrs",
      "name": "diametrs",
      "kind": "Measure"
    },
    {
      "displayName": "Sensor1",
      "name": "sensor_1",
      "kind": "Measure"
    },
    {
      "displayName": "Sensor2",
      "name": "sensor_2",
      "kind": "Measure"
    },
    {
      "displayName": "Sensor3",
      "name": "sensor_3",
      "kind": "Measure"
    },
    {
      "displayName": "Sensor4",
      "name": "sensor_4",
      "kind": "Measure"
    }
  ],
  "dataViewMappings": [
    {
      "categorical": {
        "categories": {
          "for": { "in": "depth" }
        },
        "values": {
          "select": [
            { "bind": { "to": "days" } },
            { "bind": { "to": "diametrs" } },
            { "bind": { "to": "sensor_1" } },
            { "bind": { "to": "sensor_2" } },
            { "bind": { "to": "sensor_3" } },
            { "bind": { "to": "sensor_4" } }
          ]
        }
      }
    }
  ]
}
But in the visualization it is inconvenient to work with the categorical -> values array.
Is it possible for categorical -> values to be an object with keys instead?

I do not think this is possible directly through data view mapping. What I usually do when I want the data prepared in a specific format, convenient for visualization with d3.js, is write a custom function that transforms the data from VisualUpdateOptions.
Then I call this function inside public update(options: VisualUpdateOptions).
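For example, here is a minimal sketch of such a transform (DepthRow and transformData are illustrative names, not part of the visuals API) that reshapes the values array into rows keyed by the role names declared in capabilities.json:

// Sketch only: assumes the standard powerbi-visuals-api typings.
interface DepthRow {
  depth: powerbi.PrimitiveValue;
  [role: string]: powerbi.PrimitiveValue;
}

function transformData(dataView: powerbi.DataView): DepthRow[] {
  const categorical = dataView.categorical;
  if (!categorical || !categorical.categories || !categorical.values) {
    return [];
  }
  const depths = categorical.categories[0].values;
  return depths.map((depth, i) => {
    const row: DepthRow = { depth };
    // Each value column reports its data role in source.roles,
    // e.g. { "days": true } or { "sensor_1": true }.
    for (const column of categorical.values) {
      const role = Object.keys(column.source.roles ?? {})[0];
      if (role) {
        row[role] = column.values[i];
      }
    }
    return row;
  });
}

Inside update(options) you would then call const rows = transformData(options.dataViews[0]); and bind rows to your d3 selection, reading values as row.days, row.diametrs, row.sensor_1, and so on.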

Related

Logic App expression to read dynamic JSON path - read child element where the parent path may change but the hierarchy remains the same

Hope all is well.
I need to create a Logic App expression that reads a child element in JSON where the name of the element and the hierarchy remain the same, but the parent names can change.
For example, JSON-1:
{
  "root": {
    "abc1": {
      "abc2": [
        {
          "element": "value1",
          "element2": "value"
        },
        {
          "element": "value2",
          "element2": "valu2"
        }
      ]
    }
  }
}
JSON-2 :
{
  "root": {
    "xyz1": {
      "xyz2": [
        {
          "element": "value1",
          "element2": "value"
        },
        {
          "element": "value2",
          "element2": "valu2"
        }
      ]
    }
  }
}
I have tried these, but no luck:
approach-1: #{body('previous-action')?['']?['']?['element']
approach-2: #{body('previous-action')???['element']
Please let me know if anyone has encountered this situation. Many thanks in advance.
I tend to find that converting the JSON to XML (at least in your case) is the simplest solution. Then, when you've done that, you can use XPath to simply make your selection.
Flow
In basic terms ...
I've defined a variable of type object that contains your JSON.
I then convert that JSON object to XML using this expression xml(variables('JSON Object'))
Next, I initialize a variable called Elements of type array (given you have multiple of them). The expression for setting that variable is where the smarts come in. That expression is xpath(xml(variables('XML')), '//element/text()') and it gets the inner text of all element nodes in the XML.
Finally, loop through the results.
If you needed to take it up a level and get the second element, then you'd need to change your XPath query to be a lot more generic so you can get the element2 nodes (and 3, 4, 5, etc. if they existed) in each array as well.
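For instance, a more generic sketch (untested, and assuming the node names all start with the literal prefix "element") could match on the local name instead:
xpath(xml(variables('XML')), '//*[starts-with(local-name(), "element")]/text()')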
Note: I've stuck to your specific question of looking for element.
Result
This definition (which can be loaded directly into your tenant) demonstrates the thinking ...
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "actions": {
      "For_Each_Element": {
        "actions": {
          "Set_Element": {
            "inputs": {
              "name": "Element",
              "value": "@{item()}"
            },
            "runAfter": {},
            "type": "SetVariable"
          }
        },
        "foreach": "@variables('Elements')",
        "runAfter": {
          "Initialize_Element": [
            "Succeeded"
          ]
        },
        "type": "Foreach"
      },
      "Initialize_Element": {
        "inputs": {
          "variables": [
            {
              "name": "Element",
              "type": "string"
            }
          ]
        },
        "runAfter": {
          "Initialize_Elements": [
            "Succeeded"
          ]
        },
        "type": "InitializeVariable"
      },
      "Initialize_Elements": {
        "inputs": {
          "variables": [
            {
              "name": "Elements",
              "type": "array",
              "value": "@xpath(xml(variables('XML')), '//element/text()')"
            }
          ]
        },
        "runAfter": {
          "Initialize_XML": [
            "Succeeded"
          ]
        },
        "type": "InitializeVariable"
      },
      "Initialize_JSON_Object": {
        "inputs": {
          "variables": [
            {
              "name": "JSON Object",
              "type": "object",
              "value": {
                "root": {
                  "abc1": {
                    "abc2": [
                      {
                        "element": "value1",
                        "element2": "value"
                      },
                      {
                        "element": "value2",
                        "element2": "valu2"
                      }
                    ]
                  }
                }
              }
            }
          ]
        },
        "runAfter": {},
        "type": "InitializeVariable"
      },
      "Initialize_XML": {
        "inputs": {
          "variables": [
            {
              "name": "XML",
              "type": "string",
              "value": "@{xml(variables('JSON Object'))}"
            }
          ]
        },
        "runAfter": {
          "Initialize_JSON_Object": [
            "Succeeded"
          ]
        },
        "type": "InitializeVariable"
      }
    },
    "contentVersion": "1.0.0.0",
    "outputs": {},
    "parameters": {
      "ParameterTest1": {
        "defaultValue": "\"\"",
        "type": "String"
      }
    },
    "triggers": {
      "manual": {
        "inputs": {
          "method": "GET",
          "schema": {}
        },
        "kind": "Http",
        "type": "Request"
      }
    }
  },
  "parameters": {}
}

How can I pull records from a given month in Strapi GraphQL?

Hi, I would like to pull from GraphQL only those records whose date falls within a given month - August.
If I want to pull another month, it should be enough to change it only in the query. At the moment, my query returns all the months instead of the one given inside the filter.
schema.json
{
  "kind": "collectionType",
  "collectionName": "product_popularities",
  "info": {
    "singularName": "product-popularity",
    "pluralName": "product-popularities",
    "displayName": "Popularity",
    "description": ""
  },
  "options": {
    "draftAndPublish": true
  },
  "pluginOptions": {},
  "attributes": {
    "podcast": {
      "type": "relation",
      "relation": "manyToOne",
      "target": "api::product.products",
      "inversedBy": "products"
    },
    "value": {
      "type": "integer"
    },
    "date": {
      "type": "date"
    }
  }
}
My query
query {
  Popularities(filters: {date: {contains: [2022-08]}}) {
    data {
      attributes {
        date
        value
      }
    }
  }
}
Response
{
  "data": {
    "Popularities": {
      "data": [
        {
          "attributes": {
            "date": "2022-08-03",
            "value": 50
          }
        },
        {
          "attributes": {
            "date": "2022-08-04",
            "value": 1
          }
        },
        {
          "attributes": {
            "date": "2022-08-10",
            "value": 100
          }
        },
        {
          "attributes": {
            "date": "2022-07-06",
            "value": 20
          }
        }
      ]
    }
  }
}
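The contains filter as written does not appear to be applied at all (note the 2022-07-06 record in the response); contains is a string operator, not a date-range one. As a sketch, assuming Strapi v4's default filter operators, a between range on the date field should do it (to pull another month, replace the two boundary dates):

query {
  Popularities(filters: {date: {between: ["2022-08-01", "2022-08-31"]}}) {
    data {
      attributes {
        date
        value
      }
    }
  }
}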

Extract values of an array and add them in the same select in MongoDB

I am new to MongoDB aggregation and I have this situation: I have this JSON and I need to convert, via "select", this object:
{
  "type": "PF",
  "code": 12345,
  "Name": "Darth Vader",
  "currency": "BRL",
  "status": "SINGLE",
  "adress": [
    {
      "localization": "DEATH STAR",
      "createDate": 1627990848665
    },
    {
      "localization": "TATOOINE",
      "createDate": 1627990555665
    }
  ]
}
this way:
{
  "type": "PF",
  "code": 12345,
  "Name": "Darth Vader",
  "currency": "BRL",
  "status": "SINGLE",
  "localization": "DEATH STAR",
  "createDate": 1627990848665
},
{
  "type": "PF",
  "code": 12345,
  "Name": "Darth Vader",
  "currency": "BRL",
  "status": "SINGLE",
  "localization": "TATOOINE",
  "createDate": 1627990555665
}
So, after my query completes, I will have 2 objects instead of 1. How can I do this?
I would like to do this via select because, after converting, I will sort by createDate and limit the number of records returned to the API. I'm using Criteria in my project.
The way to do this is $unwind; it makes one copy of the document for each member of the array.
Test code here
db.collection.aggregate([
  {
    "$unwind": {
      "path": "$user.adress"
    }
  },
  {
    "$set": {
      "user": {
        "$mergeObjects": [
          "$user",
          "$user.adress"
        ]
      }
    }
  },
  {
    "$unset": [
      "user.adress"
    ]
  },
  {
    "$sort": {
      "user.createDate": 1
    }
  },
  {
    "$limit": 10
  }
])
Edit 1: the above is for the case where user is a field; if user was instead the name of the collection, use the query below.
$$ROOT is a system variable whose value is the entire document.
Test code here
Query
db.collection.aggregate([
  {
    "$unwind": {
      "path": "$adress"
    }
  },
  {
    "$replaceRoot": {
      "newRoot": {
        "$mergeObjects": [
          "$$ROOT",
          "$adress"
        ]
      }
    }
  },
  {
    "$unset": [
      "adress"
    ]
  },
  {
    "$sort": {
      "createDate": 1
    }
  },
  {
    "$limit": 10
  }
])
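With the question's sample document, that second pipeline should emit two root-level documents, sorted by createDate ascending (TATOOINE's timestamp is the smaller of the two):

{
  "type": "PF",
  "code": 12345,
  "Name": "Darth Vader",
  "currency": "BRL",
  "status": "SINGLE",
  "localization": "TATOOINE",
  "createDate": 1627990555665
},
{
  "type": "PF",
  "code": 12345,
  "Name": "Darth Vader",
  "currency": "BRL",
  "status": "SINGLE",
  "localization": "DEATH STAR",
  "createDate": 1627990848665
}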

Any idea how to do custom supportedCookingModes in Alexa discovery?

I'm trying to return a Discover.Response, but supportedCookingModes only seems to accept standard values, and only in the format of ["OFF","BAKE"], not custom values as indicated by the documentation. Any idea how to specify custom values?
{
  "event": {
    "header": {
      "namespace": "Alexa.Discovery",
      "name": "Discover.Response",
      "payloadVersion": "3",
      "messageId": "asdf"
    },
    "payload": {
      "endpoints": [
        {
          "endpointId": "asdf",
          "capabilities": [
            {
              "type": "AlexaInterface",
              "interface": "Alexa.Cooking",
              "version": "3",
              "properties": {
                "supported": [
                  {
                    "name": "cookingMode"
                  }
                ],
                "proactivelyReported": true,
                "retrievable": true,
                "nonControllable": false
              },
              "configuration": {
                "supportsRemoteStart": true,
                "supportedCookingModes": [
                  {
                    "value": "OFF"
                  },
                  {
                    "value": "BAKE"
                  },
                  {
                    "value": "CUSTOM",
                    "customName": "FANCY_NANCY_MODE"
                  }
                ]
              }
            }
          ]
        }
      ]
    }
  }
}
Custom cooking modes are brand specific. This functionality is not yet publicly available. I recommend choosing one of the existing cooking modes:
https://developer.amazon.com/en-US/docs/alexa/device-apis/cooking-property-schemas.html#cooking-mode-values
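In the meantime, a sketch of the configuration block restricted to the standard modes, in the plain string-array format the question reports being accepted:

"configuration": {
  "supportsRemoteStart": true,
  "supportedCookingModes": ["OFF", "BAKE"]
}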

NiFi Convert JSON to CSV via JsonPathReader or JsonTreeReader

I am trying to convert a JSON file into CSV, but I don't seem to have any luck in doing so. My JSON looks something like this:
...
{
{"meta": {
"contentType": "Response"
},
"content": {
"data": {
"_type": "ObjectList",
"erpDataObjects": [
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
"fields": {
"number": {
"value": "1",
},
"id": {
"value": "10000"
},
}
}
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
"fields": {
"number": {
"value": "2",
},
"id": {
"value": "10001"
},
}
}
{
"meta": {
"lastModified": "2020-08-10T08:37:21.000+0000",
},
"head": {
.. much more data
I basically want my CSV to look like this:
number,id
1,10000
2,10001
My flow looks like this:
GetFile -> Set the output-file name -> ConvertRecord -> UpdateAttribute -> PutFile
ConvertRecord uses the JsonTreeReader and a CSVRecordSetWriter
(Screenshots of the JsonTreeReader and CSVRecordSetWriter configurations appeared here.)
They both call on an AvroSchemaRegistry, which looks like this:
(Screenshot of the AvroSchemaRegistry configuration appeared here.)
The AvroSchema itself looks like this:
{
  "type": "record",
  "name": "head",
  "fields": [
    { "name": "number", "type": ["string"] },
    { "name": "id", "type": ["string"] }
  ]
}
But I only get this output:
number,id
,
Which makes sense, because I'm not specifically indicating where those values are located. I used the JsonPathReader instead before, configured like this:
(Screenshot of the JsonPathReader configuration appeared here.)
Which obviously only gave me one record. I'm not really sure how I can configure either of the two to output exactly what I want. Help would be much appreciated!
Using ConvertRecord for JSON -> CSV is mostly intended for "flat" JSON files, where each field in the object becomes a column in the outgoing CSV file. For nested/complex structures, consider JoltTransformJSON (or JoltTransformRecord); it allows you to do more complex transformations. Your example doesn't appear to be valid JSON as-is, but assuming you have something like this as input:
{
  "meta": {
    "contentType": "Response"
  },
  "content": {
    "data": {
      "_type": "ObjectList",
      "erpDataObjects": [
        {
          "meta": {
            "lastModified": "2020-08-10T08:37:21.000+0000"
          },
          "head": {
            "fields": {
              "number": {
                "value": "1"
              },
              "id": {
                "value": "10000"
              }
            }
          }
        },
        {
          "meta": {
            "lastModified": "2020-08-10T08:37:21.000+0000"
          },
          "head": {
            "fields": {
              "number": {
                "value": "2"
              },
              "id": {
                "value": "10001"
              }
            }
          }
        }
      ]
    }
  }
}
The following JOLT spec should give you what you want for output:
[
  {
    "operation": "shift",
    "spec": {
      "content": {
        "data": {
          "erpDataObjects": {
            "*": {
              "head": {
                "fields": {
                  "number": {
                    "value": "[&4].number"
                  },
                  "id": {
                    "value": "[&4].id"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
]
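In that shift, * matches each index of the erpDataObjects array, and [&4] reaches four levels back up from the matched value (through number/id, fields, and head) to reuse that index in the output array, so the intermediate JSON handed to your writer should be:

[
  {
    "number": "1",
    "id": "10000"
  },
  {
    "number": "2",
    "id": "10001"
  }
]

which then flattens straight into the number,id CSV rows you're after.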
