Elasticsearch: ordering aggregation buckets by a field (can be text/string)

My documents have a category_id field.
This is my aggregation query:
"aggs": {
"categories": {
"filter": {
"bool": {
"must": [
{
"exists": {
"field": "price"
}
}
]
}
},
"aggs": {
"categories": {
"terms": {
"field": "category_id",
"order": {
"_count": "desc"
},
"size": 15
}
}
}
}
It produces the following results:
"categories" : {
"doc_count" : 92485,
"categories" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 4780,
"buckets" : [ {
"key" : 5053,
"doc_count" : 21827
}, {
"key" : 5413,
"doc_count" : 15760
}, {
"key" : 5057,
"doc_count" : 12473
}, {
"key" : 77978,
"doc_count" : 11388
}, {
"key" : 5030,
"doc_count" : 9898
}, {
"key" : 5055,
"doc_count" : 2492
}, {
"key" : 8543,
"doc_count" : 2461
}, {
"key" : 5684,
"doc_count" : 2106
}, {
"key" : 5050,
"doc_count" : 2001
}, {
"key" : 8544,
"doc_count" : 1803
}, {
"key" : 5049,
"doc_count" : 1635
}, {
"key" : 5054,
"doc_count" : 1284
}, {
"key" : 5035,
"doc_count" : 977
}, {
"key" : 8731,
"doc_count" : 817
}, {
"key" : 8732,
"doc_count" : 783
} ]
}
}
Is it possible to get the response such that the buckets are ordered by category_id (or any other field) after bucketing? I still want to select only the 15 buckets with the highest doc_count.
Also, if possible, is there a way to do this based on a field that is text/string?
I tried a sub-aggregation but couldn't figure it out.
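One pattern that is often suggested (a sketch, not verified against this mapping or your Elasticsearch version) is to let the terms aggregation pick the 15 largest buckets by _count, then re-sort just those buckets with a bucket_sort pipeline sub-aggregation on the special _key path. For a text/string field, the terms aggregation itself would need to run on a keyword sub-field (e.g. a hypothetical category_name.keyword):

"aggs": {
  "categories": {
    "terms": {
      "field": "category_id",
      "order": { "_count": "desc" },  // pick the 15 largest buckets first
      "size": 15
    },
    "aggs": {
      "by_key": {
        "bucket_sort": {
          "sort": [ { "_key": { "order": "asc" } } ]  // then re-sort those 15 buckets by key
        }
      }
    }
  }
}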

Related

es cumulative_sum: cannot limit the number of returned docs

I am confused about how to limit the number of buckets returned from a cumulative_sum aggregation. This is my search:
{
  "query": { "match_all": {} },
  "size": 0,
  "aggs": {
    "group_by_date": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day"
      },
      "aggs": {
        "cumulative_docs": {
          "cumulative_sum": { "buckets_path": "_count" }
        }
      }
    }
  }
}
and it returns the maximum number of buckets:
"aggregations" : {
"group_by_date" : {
"buckets" : [
{
"key_as_string" : "2022-09-03T00:00:00.000Z",
"key" : 1662163200000,
"doc_count" : 19,
"cumulative_docs" : {
"value" : 19.0
}
},
{
"key_as_string" : "2022-09-04T00:00:00.000Z",
"key" : 1662249600000,
"doc_count" : 0,
"cumulative_docs" : {
"value" : 19.0
}
},
{
"key_as_string" : "2022-09-05T00:00:00.000Z",
"key" : 1662336000000,
"doc_count" : 0,
"cumulative_docs" : {
"value" : 19.0
}
},
{
"key_as_string" : "2022-09-06T00:00:00.000Z",
"key" : 1662422400000,
"doc_count" : 0,
"cumulative_docs" : {
"value" : 19.0
}
},
{
"key_as_string" : "2022-09-07T00:00:00.000Z",
"key" : 1662508800000,
"doc_count" : 0,
"cumulative_docs" : {
"value" : 19.0
}
},
{
"key_as_string" : "2022-09-08T00:00:00.000Z",
"key" : 1662595200000,
"doc_count" : 0,
"cumulative_docs" : {
"value" : 19.0
}
},
...
I tried to use bucket_selector to filter the top 10 (or N) from cumulative_sum, but it returned an error along the lines of "cumulative_sum cannot support sub-aggregations"; I also tried the size param, but it is not supported.
If I want to return only ten buckets or so (a number I can specify myself), how can I revise my query?
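For what it's worth, here is a sketch (the aggregation name limit_buckets is just illustrative) that puts a bucket_sort pipeline aggregation next to the cumulative_sum rather than under it; a bucket_sort with only a size parameter truncates the parent histogram's buckets:

{
  "query": { "match_all": {} },
  "size": 0,
  "aggs": {
    "group_by_date": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day"
      },
      "aggs": {
        "cumulative_docs": {
          "cumulative_sum": { "buckets_path": "_count" }
        },
        "limit_buckets": {
          "bucket_sort": { "size": 10 }  // keep only the first 10 date buckets
        }
      }
    }
  }
}

bucket_sort also accepts a from parameter, so from plus size gives rough pagination over the buckets.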

Aggregating all fields for an object in a search query, without manually specifying the fields

I have an index products that has an internal object attributes, which looks like:
{
  properties: {
    id: {...},
    name: {...},
    colors: {...},
    // remaining fields
  }
}
I'm trying to produce a search query of this form, and I need to figure out how to write the aggs object:
{ query: {...}, aggs: {...} }
I can write this out manually for two fields to get the desired result; however, the object contains 50+ fields, so I need something that handles them all automatically:
"aggs": {
"attributes.color_group.id": {
"terms": {
"field": "attributes.color_group.id.keyword"
}
},
"attributes.product_type.id": {
"terms": {
"field": "attributes.product_type.id.keyword"
}
}
}
This gives me the result:
"aggregations" : {
"attributes.product_type.id" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 34,
"buckets" : [
{
"key" : "374",
"doc_count" : 203
},
{
"key" : "439",
"doc_count" : 79
},
{
"key" : "460",
"doc_count" : 28
},
{
"key" : "451",
"doc_count" : 24
},
{
"key" : "558",
"doc_count" : 18
},
{
"key" : "500",
"doc_count" : 10
},
{
"key" : "1559",
"doc_count" : 9
},
{
"key" : "1560",
"doc_count" : 9
},
{
"key" : "455",
"doc_count" : 7
},
{
"key" : "501",
"doc_count" : 6
}
]
},
"attributes.color_group.id" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 35,
"buckets" : [
{
"key" : "12",
"doc_count" : 98
},
{
"key" : "54",
"doc_count" : 48
},
{
"key" : "118",
"doc_count" : 43
},
{
"key" : "110",
"doc_count" : 41
},
{
"key" : "111",
"doc_count" : 35
},
{
"key" : "71",
"doc_count" : 35
},
{
"key" : "119",
"doc_count" : 24
},
{
"key" : "62",
"doc_count" : 21
},
{
"key" : "115",
"doc_count" : 20
},
{
"key" : "113",
"doc_count" : 15
}
]
}
}
This is exactly what I want. After some research I found that you can use query_string, which would allow me to match everything starting with attributes.; however, it does not seem to work inside aggregations.
As far as I know, what you are asking is not possible with the built-in functionality of Elasticsearch, but there are some workarounds, such as:
Use a search template:
Below is an example of a search template where you provide the list of fields as an array and it creates a terms aggregation for every provided field. You can store the search template using the Script API and use the template's id when calling the search request (see the sketch after the note below).
POST dyagg/_search/template
{
  "source": """{
    "query": {
      "match_all": {}
    },
    "aggs": {
      {{#filter}}
      "{{.}}": {
        "terms": {
          "field": "{{.}}",
          "size": 10
        }
      },
      {{/filter}}
      "name": {
        "terms": {
          "field": "name",
          "size": 10
        }
      }
    }
  }""",
  "params": {
    "filter": ["lastname", "firstname", "city", "country"]
  }
}
Response:
"aggregations" : {
"country" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "India",
"doc_count" : 4
}
]
},
"firstname" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Rajan",
"doc_count" : 1
},
{
"key" : "Sagar",
"doc_count" : 1
},
{
"key" : "Sajan",
"doc_count" : 1
},
{
"key" : "Sunny",
"doc_count" : 1
}
]
},
"city" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Mumbai",
"doc_count" : 2
},
{
"key" : "Pune",
"doc_count" : 2
}
]
},
"name" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Rajan Desai",
"doc_count" : 1
},
{
"key" : "Sagar Patel",
"doc_count" : 1
},
{
"key" : "Sajan Patel",
"doc_count" : 1
},
{
"key" : "Sunny Desai",
"doc_count" : 1
}
]
},
"lastname" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "Desai",
"doc_count" : 2
},
{
"key" : "Patel",
"doc_count" : 2
}
]
}
}
The second way is to build the aggregations programmatically. Please check this Stack Overflow answer, where they describe how to do it in PHP; you can follow the same approach in any other language.
NOTE:
If you look at the search template, I added one static aggregation on the name field. The reason is to avoid the extra comma left over when the mustache loop completes; if you do not add it, you will get a json_parse_exception.
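For completeness, here is a sketch of storing the template through the Script API and invoking it by id (the id dyagg-template is just an illustrative name; the source is the same as above):

PUT _scripts/dyagg-template
{
  "script": {
    "lang": "mustache",
    "source": """{
      "query": { "match_all": {} },
      "aggs": {
        {{#filter}}
        "{{.}}": { "terms": { "field": "{{.}}", "size": 10 } },
        {{/filter}}
        "name": { "terms": { "field": "name", "size": 10 } }
      }
    }"""
  }
}

GET dyagg/_search/template
{
  "id": "dyagg-template",
  "params": {
    "filter": ["lastname", "firstname", "city", "country"]
  }
}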

Aggregate by custom defined buckets, according to field value

I'm interested in aggregating my data into buckets, but I want to put two distinct values into the same bucket.
This is what I mean:
Say I have this query:
GET _search
{
  "size": 0,
  "aggs": {
    "my-agg-name": {
      "terms": {
        "field": "ecs.version"
      }
    }
  }
}
It returns this response:
"aggregations" : {
"my-agg-name" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "1.12.0",
"doc_count" : 642826144
},
{
"key" : "8.0.0",
"doc_count" : 204064845
},
{
"key" : "1.1.0",
"doc_count" : 16508253
},
{
"key" : "1.0.0",
"doc_count" : 9162928
},
{
"key" : "1.6.0",
"doc_count" : 1111542
},
{
"key" : "1.5.0",
"doc_count" : 10445
}
]
}
}
Every distinct value of the field ecs.version is in its own bucket.
But say I wanted to define my buckets such that:
bucket1: [1.12.0, 8.0.0]
bucket2: [1.6.0, 8.4.0]
bucket3: [1.0.0, 8.8.0]
Is this possible in any way?
I know I can just return all the buckets and do the sums programmatically, but this list can be very long and I don't think that would be efficient. Am I wrong?
You can use a runtime mapping to generate a runtime field and use that field for the aggregation. I built the example below on ES 7.16.
I indexed some sample documents; below is the aggregation output without joining multiple values into one bucket:
"aggregations" : {
"version" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "1.12.0",
"doc_count" : 3
},
{
"key" : "1.6.0",
"doc_count" : 3
},
{
"key" : "8.4.0",
"doc_count" : 3
},
{
"key" : "8.0.0",
"doc_count" : 2
}
]
}
}
You can use the query below with the runtime mapping, but you need to add an if condition for each of your version groupings:
{
  "size": 0,
  "runtime_mappings": {
    "normalized_version": {
      "type": "keyword",
      "script": """
        String version = doc['version.keyword'].value;
        if (version.equals('1.12.0') || version.equals('8.0.0')) {
          emit('1.12.0, 8.0.0');
        } else if (version.equals('1.6.0') || version.equals('8.4.0')) {
          emit('1.6.0, 8.4.0');
        } else {
          emit(version);
        }
      """
    }
  },
  "aggs": {
    "genres": {
      "terms": {
        "field": "normalized_version"
      }
    }
  }
}
Below is the output of the above aggregation query:
"aggregations" : {
"genres" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "1.6.0, 8.4.0",
"doc_count" : 6
},
{
"key" : "1.12.0, 8.0.0",
"doc_count" : 5
}
]
}
}
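As a side note (an alternative to the runtime-field approach above, assuming ecs.version is a keyword field as in the question), the built-in filters aggregation can express the same fixed groupings without a runtime field:

{
  "size": 0,
  "aggs": {
    "version_groups": {
      "filters": {
        "filters": {
          "bucket1": { "terms": { "ecs.version": [ "1.12.0", "8.0.0" ] } },  // one named filter per custom bucket
          "bucket2": { "terms": { "ecs.version": [ "1.6.0", "8.4.0" ] } },
          "bucket3": { "terms": { "ecs.version": [ "1.0.0", "8.8.0" ] } }
        }
      }
    }
  }
}

Each named filter becomes one bucket, so the response contains exactly bucket1, bucket2, and bucket3 with their document counts.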

How to group by number of documents in buckets in Elasticsearch?

Is there a way to group by the number of documents in buckets after a terms aggregation?
My terms aggregation looks like this:
"aggs": {
"values": {
"terms": {
"field": "number"
}
}
}
The output looks like this:
...
{ "key" : "12345",  "doc_count" : 99 },
{ "key" : "123456", "doc_count" : 99 },
{ "key" : "112233", "doc_count" : 6 },
{ "key" : "334455", "doc_count" : 6 },
{ "key" : "456789", "doc_count" : 5 }
...
And what I want to have is something like this:
...
{ "doc_count" : "99", "buckets" : 2 },
{ "doc_count" : "6",  "buckets" : 2 },
{ "doc_count" : "5",  "buckets" : 1 }
...
Is there an easy way in Elasticsearch to get this result?

Can I get a clean data structure with aggregations

I'm trying to create an aggregation, but the results are bloated with metadata and don't fit my use case.
This is my aggregation definition:
"aggs": {
"attributes": {
"nested": {
"path": "attributes"
},
"aggs": {
"facet_name": {
"terms": {
"field": "attributes.name.keyword"
},
"aggs": {
"facet_value": {
"terms": {
"field": "attributes.value.keyword"
}
}
}
}
}
}
},
I'm trying to get a data structure similar to this:
[
  {
    "name": "Materiał",
    "values": ["stal", "drewno", ...]
  },
  {
    "name": "Kolor",
    "values": ["czarny", "kolorowy", ...]
  }
]
Instead, I get the result set below, which is the current aggregation response:
"aggregations" : {
"attributes" : {
"doc_count" : 142307,
"facet_name" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 38074,
"buckets" : [
{
"key" : "Materiał",
"doc_count" : 21811,
"facet_value" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 4977,
"buckets" : [
{
"key" : "stal",
"doc_count" : 3141
},
{
"key" : "drewno",
"doc_count" : 2944
},
{
"key" : "szkło",
"doc_count" : 2885
},
{
"key" : "tworzywo sztuczne",
"doc_count" : 1529
},
{
"key" : "metal",
"doc_count" : 1303
},
...
This is the closest result I could get; I couldn't find a way to restructure the resulting object or strip the metadata from the aggregations.
Unfortunately, you cannot change the structure of the response body to match your desired result; this is just how the Elasticsearch REST API is implemented.
You would have to iterate over the buckets array and build your own structure/object by extracting the particular values.
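That said, the filter_path request parameter (Elasticsearch's standard response-filtering option) can at least strip most of the metadata, even though it cannot reshape the buckets into the target array. A sketch using the ** wildcard to keep only the bucket keys:

GET /_search?filter_path=aggregations.**.key
{
  "size": 0,
  "aggs": {
    "attributes": {
      "nested": { "path": "attributes" },
      "aggs": {
        "facet_name": {
          "terms": { "field": "attributes.name.keyword" },
          "aggs": {
            "facet_value": {
              "terms": { "field": "attributes.value.keyword" }
            }
          }
        }
      }
    }
  }
}

A client-side loop over the remaining buckets is still needed to pair each facet_name key with its facet_value keys.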
