I want to display the min, max, and average of the same data field. I tried the designer on the demo page, and it is not possible to add the same column to the Values list twice, nor is it possible to check multiple aggregation functions in the dropdown.
I tried to edit the report JSON manually, but this doesn't seem to be supported:
"measures": [
{
"uniqueName": "myvalue",
"aggregation": "min"
},
{
"uniqueName": "myvalue",
"aggregation": "max"
},
You can add multiple aggregations with the calculated values option, e.g.:
"measures":[
{
"uniqueName": "Min myValue",
"formula": "min('myValue')",
},
{
"uniqueName": "Max myValue",
"formula": "max('myValue')",
}
Also, you can find the "Add calculated value" button at the top of the Fields List window. It allows creating calculated values at runtime using the UI tools.
You can find more examples with calculated values here.
I have some json data that I would like to filter in a Power Automate Flow.
A simplified version of the json is as follows:
[
  {
    "ItemId": "1",
    "Blah": "test1",
    "CustomFieldArray": [
      {
        "Name": "Code",
        "Value": "A"
      },
      {
        "Name": "Category",
        "Value": "Test"
      }
    ]
  },
  {
    "ItemId": "2",
    "Blah": "test2",
    "CustomFieldArray": [
      {
        "Name": "Code",
        "Value": "B"
      },
      {
        "Name": "Category",
        "Value": "Test"
      }
    ]
  }
]
For example, I wish to filter items based on Name = "Code" and Value = "A". I should be left with the item with ItemId 1 in that case.
I can't figure out how to do this in Power Automate. Changing the data structure would be nice, but this is the way the data is, and I'm trying to work out whether this is possible in Power Automate without changing the data itself.
Firstly, I had to fix your JSON; it wasn't complete.
Secondly, filtering on sub-array information isn't what I'd call easy. However, to get around the limitations, you can perform a bit of trickery.
Prior to the Filter array step, I created a variable of type Array, called it Array, and loaded it with the data above.
In the Filter array step, the left-hand side expression is ...
string(item()?['CustomFieldArray'])
... and the contains comparison on the right-hand side is simply a string with the appropriate filter value ...
{"Name":"Code","Value":"A"}
... it's not an expression or a proper object, just a string.
If you need to enhance it to cater for differences in case, just set everything to lower case using the toLower expression on the left.
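For reference, the whole condition in the Filter array action's advanced mode would look something like this (a sketch; note the right-hand value is typed in lower case by hand, and it must match the compact JSON that string() produces, including property order):
@contains(toLower(string(item()?['CustomFieldArray'])), '{"name":"code","value":"a"}')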
That will produce your desired result: the output array contains only the item with ItemId 1, i.e. the filter has reduced the size of the array.
In my Elasticsearch-indexed eCommerce application, one product has multiple suppliers, and each supplier may have its own original price, discount price, etc.
If the user searches for a product, I need to show the lowest-price supplier's details (on the search result page as well as the product detail page).
How can I prepare the Elasticsearch multi_match query to fetch the relevant records with price/offerPrice in ascending order?
Is there any better design than this?
I have created the index in Elasticsearch in the following nested object format:
{
  "skuId": "100",
  "skuName": "I-Phone",
  "Sellers": {
    "seller": [
      {
        "Supplier": {
          "SupplierId": 1,
          "supplierAlias": "X1",
          "supplierDesc": "X1"
        },
        "price": 10,
        "offerPrice": 8
      },
      {
        "Supplier": {
          "SupplierId": 2,
          "supplierAlias": "X2",
          "supplierDesc": "X2"
        },
        "price": 9,
        "offerPrice": null
      }
    ]
  }
}
If you need the seller array ordered in your index, you have two solutions:
1. You can use any programming language to order the array after fetching your target document.
2. You can re-index the data as separate documents to get full control over it.
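As a rough sketch of the second option, each seller could become its own document (the flattened field names here are placeholders), and the search can then sort directly on offerPrice:
// one flattened document per seller
{ "skuId": "100", "skuName": "I-Phone", "supplierId": 1, "supplierAlias": "X1", "price": 10, "offerPrice": 8 }
// query: match the product, sort cheapest offer first
{
  "query": { "multi_match": { "query": "I-Phone", "fields": ["skuName"] } },
  "sort": [ { "offerPrice": { "order": "asc", "missing": "_last" } } ]
}
The missing: "_last" clause pushes sellers without an offerPrice (like supplier 2 above) to the end of the results.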
I want to be able to return a set of counts of individual documents from a single index based on a previous set of results, and am wondering if there is a way to do it without running a separate query for each.
So, given a data set like this (simplified version of my ES documents):
{
  "name": "visit",
  "sessionId": "session1"
},
{
  "name": "visit",
  "sessionId": "session2"
},
{
  "name": "visit",
  "sessionId": "session3"
},
{
  "name": "click",
  "sessionId": "session1"
},
{
  "name": "click",
  "sessionId": "session3"
}
What I would like to do is search for name: visit and get a count of all those. That part is easy. But I would also like to count the name: click docs whose sessionId appears in the name: visit result set, and return that count alongside the name: visit count.
Is there an easy way to do this? I have looked at the aggregation APIs, but they all seem to not quite fit my needs. There is also a parent/child relationship, but it doesn't apply to my situation, since both kinds of documents I want counts of are of the same type.
Expected result would be something like this:
{
  "count": {
    // total number of visit events since this is my start point
    "visit": 3,
    // the amount of click results that have sessionId
    // matching my previous search's sessionId values
    "click": 2
  }
}
At first glance, you need to do this in two queries:
the first aggregation query to retrieve the sessionIds and
a second aggregation query filtered with those sessionIds to find the count of clicks.
I don't think it's a big deal to run those two queries, but that depends on how much data you have and how many sessionIds you want to retrieve at once.
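A minimal sketch of those two queries, assuming an index of events and that sessionId is indexed as a not-analyzed/keyword field:
// query 1: count visits and collect their sessionIds
{
  "size": 0,
  "query": { "term": { "name": "visit" } },
  "aggs": {
    "sessions": { "terms": { "field": "sessionId", "size": 1000 } }
  }
}
// query 2: count clicks whose sessionId came back from query 1
{
  "query": {
    "bool": {
      "must": { "term": { "name": "click" } },
      "filter": { "terms": { "sessionId": ["session1", "session2", "session3"] } }
    }
  }
}
The hits.total of the first response gives the visit count, the terms buckets give the sessionIds, and the second query (run against _count, or with size: 0) gives the click count.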
So I am using this approach from the CouchDB docs to perform pagination:
Request rows_per_page + 1 rows from the view
Display rows_per_page rows, and store the extra row as next_startkey and next_startkey_docid
As page information, keep startkey and next_startkey
Use the next_* values to create the next link, and use the others to create the previous link
One thing I don't understand is how to perform sorting with this approach, assuming each document has a last-updated timestamp and I want to sort on that field instead of sorting by ids.
First of all, sorting will always be on the KEYS.
Querying _all_docs is like querying a table where the key is the _id.
[
  {
    "key": "my_first_id",
    "value": {}
  },
  {
    "key": "my_second_id",
    "value": {}
  }
]
So if you want to sort on another field than _id, you will need to use map/reduce (views). For example, you could create a view where the key is the updatedAt field.
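A minimal sketch of such a view, assuming the timestamp field is called updatedAt (the design-document and view names are just placeholders):
{
  "_id": "_design/sorting",
  "views": {
    "by_updated_at": {
      "map": "function (doc) { if (doc.updatedAt) { emit(doc.updatedAt, null); } }"
    }
  }
}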
Querying that view would return something like this:
[
  {
    "key": "1475553268",
    "value": {}
  },
  {
    "key": "1475858068",
    "value": {}
  }
]
The rows come back sorted by key (ascending by default), so the view gives you the documents sorted by updatedAt :)
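Combining that with the pagination recipe above is then just a matter of querying the view instead of _all_docs. A sketch, with rows_per_page = 5 and descending=true for newest first (the database name and doc id are placeholders, and the parameters need URL-encoding in practice):
GET /mydb/_design/sorting/_view/by_updated_at?include_docs=true&descending=true&limit=6&startkey="1475858068"&startkey_docid=some_doc_id
The sixth row's key and id become next_startkey and next_startkey_docid for the next page. Because emitted keys may not be unique, startkey_docid is what keeps consecutive pages from overlapping.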
An Elasticsearch index contains a Product entity. Each product has an array of Component entities.
A component may contain an optional outOfStock field.
Given the following example:
"Product":
"name": "blue_toy"
"Components": [
{
"partnumber": "100"
"supplier": "smith and sons"
"outOfStock": "true"
}
{
"partnumber": "200"
"supplier": "smith and sons"
}]
}
"Product":
"name": "green_toy"
"Components": [
{
"partnumber": "300"
"supplier": "smith and sons"
}]
}
blue_toy cannot be built because one part is unavailable.
I want to show in a chart how many products cannot be built, as opposed to the number that can be built.
Given that if even one component is unavailable the entire product cannot be built, in the above example the distribution would be 50% - 50%.
Note that this is different from how many components of the total set are out of stock (which would be 33% - 66%).
In essence, the question is how to mark or flag a root entity based on the contents of one of its nested entities.
How could one do this in Kibana?
Thanks
I don't know if it will fit your example, but I once had a similar problem, which I solved with the "copy_to" parameter.
In your example, you have to change the mapping of Product to add a "copy_to" to your "outOfStock" field.
It'll create a field (with a specified name) in the root document holding your "outOfStock" value.
This field will be added at indexing time, and you can say that if the field created by the "copy_to" is "true", then the Product cannot be built.
See: https://www.elastic.co/guide/en/elasticsearch/reference/1.4/mapping-core-types.html
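A minimal sketch of such a mapping, assuming string fields and a made-up root field name anyComponentOutOfStock:
{
  "mappings": {
    "product": {
      "properties": {
        "Components": {
          "properties": {
            "outOfStock": {
              "type": "string",
              "copy_to": "anyComponentOutOfStock"
            }
          }
        },
        "anyComponentOutOfStock": { "type": "string" }
      }
    }
  }
}
In Kibana you could then split the chart on anyComponentOutOfStock: products where it is "true" cannot be built, and products where the field is missing (no component was out of stock, so nothing was copied) can.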