I have a Power App that uses a flow from Power Automate.
My flow does an HTTP GET and responds with JSON to Power Apps, like below.
Here is the JSON as text:
{"value": "[{\"dataAreaId\":\"mv\",\"AccountNum\":\"100000\",\"Name\":\"*****L FOOD AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100001\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100014\",\"Name\":\"****(SEB)\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100021\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100029\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500100\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500210\",\"Name\":\"****\"}]"}
But when I try to convert this JSON to a collection, it doesn't behave like a list.
It just seems like text. Here is how I try to bind the list.
How can I create a collection from JSON to bind to the gallery view?
I found the solution. I finally created a collection from the response of the flow.
The flow's name is GetVendors.
The response of the flow looks like this:
{"value": "[{\"dataAreaId\":\"mv\",\"AccountNum\":\"100000\",\"Name\":\"*****L FOOD AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100001\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100014\",\"Name\":\"****(SEB)\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100021\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"100029\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500100\",\"Name\":\"**** AB\"},{\"dataAreaId\":\"mv\",\"AccountNum\":\"500210\",\"Name\":\"****\"}]"}
The code below creates a collection from this response:
ClearCollect(_vendorData, MatchAll(GetVendors.Run(_token.value).value, "\{""dataAreaId"":""(?<dataAreaId>[^""]*)"",""AccountNum"":""(?<AccountNum>[^""]*)"",""Name"":""(?<Name>[^""]*)""\}"));
And I could bind AccountNum and Name from the _vendorData collection to the gallery view.
In my case I had the same issue, but I couldn't manage to get data into the _vendorData collection: the MatchAll regex part was not working correctly, even though I had exactly the same scenario.
My solution was to modify the flow itself, returning a Response action instead of "Respond to a PowerApp or flow", so that I could return the full response from the HTTP action.
This also caused me some issues: when I generated the schema from a sample, I could not register the flow in the Power App, getting the error "Failed during http send request".
The solution was to manually review the response schema and change all column types to one of the following three, because the others are not supported: string, integer, or boolean. Object and array can be set only on top-level items, never on children, so if you have anything other than the three types mentioned, replace it with string. And no property can be left with an undefined type.
Basically I like this solution even more, because in Power Apps itself you do not need to do any conversion: you simply use the data as is, since an array is already recognized as a collection and all the properties are already named for you.
Response step schema example is below.
{
  "type": "object",
  "properties": {
    "PropertyOne": {
      "type": "string"
    },
    "PropertyTwo": {
      "type": "integer"
    },
    "PropertyThree": {
      "type": "boolean"
    },
    "PropertyFour": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "PropertyArray1": {
            "type": "string"
          },
          "PropertyArray2": {
            "type": "integer"
          },
          "PropertyArray3": {
            "type": "boolean"
          }
        }
      }
    }
  }
}
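With a schema like this defined on the Response action, the flow's output comes back to Power Apps already typed, so it can be used directly. A minimal sketch, assuming the flow is still called GetVendors as in the earlier answer and the array property is the PropertyFour from the example schema above:
// Collect the typed array returned by the flow
ClearCollect(_vendorData, GetVendors.Run(_token.value).PropertyFour);
// Gallery: Items = _vendorData, with labels bound to ThisItem.PropertyArray1 and so on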
It is easy now. Power Apps has introduced the ParseJSON function, which makes converting a string to a collection straightforward.
Table(ParseJSON(JSONString));
In the gallery, map columns like ThisItem.Value.ColumnName.
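A minimal sketch of how this could look with the flow from the answers above (the flow name GetVendors and the collection name _vendorData are carried over from there; the exact call is an assumption):
// Parse the JSON string returned in the flow's value property into a collection
ClearCollect(
    _vendorData,
    Table(ParseJSON(GetVendors.Run(_token.value).value))
);
// In the gallery: Items = _vendorData, with labels bound to
// Text(ThisItem.Value.AccountNum) and Text(ThisItem.Value.Name)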
While developing a pipeline that uses Elasticsearch as a source, I ran into an issue related to paging. I am using the Elasticsearch SQL API. I started by making the request in Postman, and it works well. The request body looks like this:
{
"query":"SELECT Id,name,ownership,modifiedDate FROM \"core\" ORDER BY Id",
"fetch_size": 20,
"cursor" : ""
}
After the first run, the response body contains a cursor string, which is a pointer to the next page. If I send the request in Postman and provide the cursor value from the previous request, it returns data for the second page, and so on. I am trying to achieve the same result in Azure Data Factory. For this I am using a copy activity, which stores the response to Azure Blob Storage. The source setup is as follows:
copy activity source configuration
This is the expression for the body:
{
"query": "SELECT Id,name,ownership,modifiedDate FROM \"#{variables('TableName')}\" ORDER BY Id","fetch_size": #{variables('Rows')}, "cursor": ""
}
I have no idea how to correctly set up the pagination rule. The pipeline works properly, but only for the first request. I've tried setting Headers.cursor to the expression $.cursor, but this setup leads to an infinite loop, and the pipeline fails with an Elasticsearch restriction.
I've also tried reading the documentation at https://learn.microsoft.com/en-us/azure/data-factory/connector-rest#pagination-support, but it seems pretty limited in terms of usage examples and difficult to understand.
Could somebody help me understand how to build a pipeline that makes use of paging?
The response with the cursor looks like this:
{
"columns": [
{
"name": "companyId",
"type": "integer"
},
{
"name": "name",
"type": "text"
},
{
"name": "ownership",
"type": "keyword"
},
{
"name": "modifiedDate",
"type": "datetime"
}
],
"rows": [
[
2,
"mic Inc.",
"manufacture",
"2021-03-31T12:57:51.000Z"
]
],
"cursor": "g/WuAwFaAXNoRG5GMVpYSjVWR2hsYmtabGRHTm9BZ0FBQUFBRUp6VGxGbUpIZWxWaVMzcGhVWEJITUhkbmJsRlhlUzFtWjNjQUFBQUFCQ2MwNWhaaVIzcFZZa3Q2WVZGd1J6QjNaMjVSVjNrdFptZDP/////DwQBZgljb21wYW55SWQBCWNvbXBhbnlJZAEHaW50ZWdlcgAAAAFmBG5hbWUBBG5hbWUBBHRleHQAAAABZglvd25lcnNoaXABCW93bmVyc2hpcAEHa2V5d29yZAEAAAFmDG1vZGlmaWVkRGF0ZQEMbW9kaWZpZWREYXRlAQhkYXRldGltZQEAAAEP"
}
I finally found the solution; hopefully it will be useful for the community.
Basically, the solution needs to be split into the following steps.
Step 1: Make the first request as in the question description and stage the file to blob storage.
Step 2: Read the blob file, get the cursor value, and set it to a variable.
Step 3: Keep requesting data in a loop with a changed body (a sketch of how Steps 2 and 3 can be wired together follows the body below):
{"cursor" : "#{variables('cursor')}" }
The pipeline looks like this:
pipeline
The pagination configuration looks like this:
pagination. It is a workaround, as the server ignores this header, but we need something that allows sending the request in a loop.
I'm trying to build a pipeline where Avro data is written into a Postgres DB. Everything works fine with simple schemas and the AvroConverter for the values. However, I would like to have a nested field written into a JSONB column. There are a couple of problems with this. First, it seems that the Connect plugin does not support STRUCT data. Second, the plugin cannot write directly into the JSONB column.
The second problem can be avoided by adding a cast in PG, as described in this issue. The first problem is proving more difficult. I have tried different transformations but have not been able to get the Connect plugin to interpret one complex field as a string. The schema in question looks something like this (in practice there would be more fields on the first level besides the timestamp):
{
"namespace": "test.schema",
"name": "nested_message",
"type": "record",
"fields": [
{
"name": "timestamp",
"type": "long"
},
{
"name": "nested_field",
"type": {
"name": "nested_field_record",
"type": "record",
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "prop",
"type": "float",
"doc": "Some property"
}
]
}
}
]
}
The message is written in Kafka as
{"timestamp":1599493668741396400,"nested_field":{"name":"myname","prop":377.93887}}
In order to write the contents of nested_field into a single DB column, I would like to interpret this entire field as a string. Is this possible? I have tried the cast transformation, but this only supports primitive Avro types. Something along the lines of HoistField could work, but I don't see a way to limit this to a single field. Any ideas or advice would be greatly appreciated.
A completely different approach would be to use two connect plugins and UPSERT into the table. One plugin would use the AvroConverter for all fields save the nested one, while the second plugin uses the StringConverter for the nested field. This feels wrong in all kinds of ways though.
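For what it's worth, here is a rough sketch of what the first of those two connectors might look like; the connector name, topic, connection URL, and key field are made up for illustration, and this is a sketch of the idea rather than a recommendation:
{
  "name": "pg-sink-flat-fields",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "nested_message",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "timestamp",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://localhost:8081",
    "transforms": "dropNested",
    "transforms.dropNested.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.dropNested.blacklist": "nested_field"
  }
}
The second connector, which would need to hand nested_field to the sink as a string, is the part that remains unclear, as noted above.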
I have a set of notification or information items stored in Elasticsearch. Once a user has seen a notification, I need to mark it as seen by that user. A user can filter documents by read/unread status. Notifications will be viewed by lots of users, and the seen status will constantly get updated. What is the best way to store this data? Should I store the list of users who have seen a notification in the same document itself, or should I create a parent-child relationship?
You should definitely avoid the parent-child and nested types, because they are computationally costly. The best way to model the relationship with a lot of data is to denormalize your data and put it in different indices. Please read here and here. Example:
PUT notification
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text"
      },
      "id_notification": {
        "type": "keyword"
      }
    }
  }
}
Then user index:
PUT user
{
  "mappings": {
    "properties": {
      "general_information": {
        "type": "text"
      },
      "id_user": {
        "type": "keyword"
      }
    }
  }
}
another index for the relationship:
PUT seen
{
  "mappings": {
    "properties": {
      "seen": {
        "properties": {
          "notification_id": {
            "type": "keyword",
            "fields": {
              "user_id": {
                "type": "keyword"
              }
            }
          }
        }
      },
      "unseen": {
        "properties": {
          "notification_id": {
            "type": "keyword",
            "fields": {
              "user_id": {
                "type": "keyword"
              }
            }
          }
        }
      }
    }
  }
}
Sorry for the text format, I don't have Kibana at hand right now. Note that to go from the information indices (user, notification) to the support index (seen) you need to make a multi-index query (docs here). It works because the names and values of the fields (user_id, notification_id) are the same across the different indices. The user_id subfields in the seen index are arrays of keywords. However, you could make user_id a single keyword and the parent of a notification_id keyword array field. Either way they keep the one-to-many relationship; the best choice depends on your data.
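A minimal sketch of such a multi-index query; the example value is made up, and it assumes the notification index exposes the same field name, notification_id, as the seen index:
GET notification,seen/_search
{
  "query": {
    "term": {
      "notification_id": "abc-123"
    }
  }
}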
While using GraphiQL works well, my boss has asked me to implement a user interface where users can select elements presented to them via UI elements like checkboxes, map relationships, and get the data. Their selections would generate a GraphQL query, call the API, and return the result to the user.
So basically this involves two generation steps: generating a user interface from a GraphQL schema, and generating a GraphQL query from the user's selections.
I searched and was not able to find any tools that already do this. My server is in Node and I am using Express GraphQL. I converted my Express schema to the GraphQL schema language using https://github.com/graphql-cli/graphql-cli and introspected it using the introspect function at https://github.com/sheerun/graphqlviz/blob/master/cli.js
The object I got back was something like this (only partial schema output is given below):
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"mutationType": {
"name": "Mutation"
},
"subscriptionType": null,
"types": [{
"kind": "OBJECT",
"name": "Query",
"description": null,
"fields": [{
"name": "employee",
"description": null,
"args": [{
"name": "ecode",
"description": null,
"type": {
"kind": "SCALAR",
"name": "String",
"ofType": null
},
"defaultValue": null
}],
I am looping through the elements trying to generate the UI, but I am quite stuck.
What is the best way to do this? Thanks in advance.
Well, for the part about generating the UI from the introspection query, I think the response contains enough data for a sufficient UI (the description of each field can be used as a placeholder for that field's input box). If you're asking how to generate a dynamic form from the introspection response, you can take a look at other projects that built JSON-to-HTML-form transformers for inspiration or direct usage (see https://github.com/brutusin/json-forms/blob/master/README.md#cdn). For complex fields (non-primitive types) you may need to do some more work.
For the query generation from the UI, you can use any query builder tool and build a query according to the user's inputs. Each combo box will be mapped to a specific SCHEMANAME.FIELDNAME, and the value will be the value of the input box.
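As a rough illustration (not from the original answer), a small sketch of what such a mapping from user selections to a query string might look like in plain JavaScript; the shape of the selections object is an assumption:
// Build a GraphQL query string from a map of { fieldName: { args, fields } }
function buildQuery(selections) {
  // selections example: { employee: { args: { ecode: "100" }, fields: ["name"] } }
  const parts = Object.entries(selections).map(([fieldName, sel]) => {
    const args = Object.entries(sel.args || {})
      .map(([name, value]) => `${name}: ${JSON.stringify(value)}`)
      .join(", ");
    const argList = args ? `(${args})` : "";
    return `${fieldName}${argList} { ${sel.fields.join(" ")} }`;
  });
  return `query { ${parts.join(" ")} }`;
}
// buildQuery({ employee: { args: { ecode: "100" }, fields: ["name"] } })
// => 'query { employee(ecode: "100") { name } }'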
I hope it helped a bit. BTW, it sounds like an interesting tool so let us know if you succeed!
I have created an Elasticsearch index from a data set containing geodata and set up a mapping for the data. Then I tried to create a Kibana visualisation using this data set. Kibana detects the geodata property but finds no results, even though there are plenty. I then ran a test on another data set with a different and much simpler layout, and Kibana visualised the geodata properly.
Here's the sample that works:
"location": {
"lat": 56.290525,
"lon": -30.163298
},
and this is its mapping:
"location": {
"type": "geo_point",
"lat_lon": true,
"geohash": true
}
And this one doesn't work:
"groupOfLocations": {
"#type": "Point",
"locationForDisplay": {
"lat": 59.21232,
"lon": 9.603803
}
}
And this is its mapping:
{
... // nested type
"locationForDisplay": {
"type": "geo_point",
"lat_lon": true,
"geohash": true
}
...
}
There are only two things that are different between the working and non-working versions:
1. The one that works has a JSON element called "location", while the other one is called "locationForDisplay".
2. The one that works has the JSON element ("location") as a top-level element, while in the other one it's an element inside the nested type.
Apart from these two differences (which I believe shouldn't matter), I can't find anything else. What can make Kibana fail?
Kibana cannot work with nested JSON; you need to change it to standard (flat) JSON.
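For the document in the question, that would mean promoting the geo_point to a top-level field, for example (field name kept from the question; this is a sketch, not tested against the original data set):
"locationForDisplay": {
  "lat": 59.21232,
  "lon": 9.603803
}
with a mapping like
"locationForDisplay": {
  "type": "geo_point"
}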