Elasticsearch sub-aggregation queries that check whether some bucket values meet a condition - elasticsearch

Hi guys! I am trying to write some aggregations that need to perform specific computation logic per bucket, and it is killing me..
I have data tracking which application features users use, like this:
[
  {
    "event_key": "basic_search",
    "user": {
      "tenant_tier": "free"
    },
    "origin": {
      "visitor_id": "xxxxxxx"
    }
  },
  {
    "event_key": "registration",
    "user": {
      "tenant_tier": "basic"
    },
    "origin": {
      "visitor_id": "xxxxxxx"
    }
  },
  {
    "event_key": "advanced_search",
    "user": {
      "tenant_tier": "basic"
    },
    "origin": {
      "visitor_id": "xxxxxxx"
    }
  }
]
The user can opt to trial the app's features under a free-tier identity, then register to unlock other features. The origin.visitor_id is derived from the website user's IP address, User-Agent, etc.
With this data, I am hoping to answer this question: "how many people used free trial features BEFORE registering".
I came up with an ES query template like the one below, but couldn't figure out how to write the sub-aggregations, which seem to require more complex scripting against the values in each bucket... Any advice is very much appreciated!
{
  "aggs": {
    "origin": {
      "terms": {
        "field": "origin.visitor_id.keyword",
        "size": 1000
      },
      "aggs": {
        "user_started_out_free": {
          # ??????
          # need to return a boolean telling whether `user.tenant_tier` of the first document in the bucket is `free`
        },
        "then_registered": {
          # ??????
          # need to return a boolean telling whether any `event_key` in the bucket is `registration`
        },
        "is_trial_user_then_registered": {
          "bucket_script": {
            "buckets_path": {
              "user_started_out_free": "user_started_out_free",
              "then_registered": "then_registered"
            },
            "script": "params.user_started_out_free && params.then_registered"
          }
        }
      }
    },
    "num_trial_then_registered": {
      "sum_bucket": {
        "buckets_path": "origin>is_trial_user_then_registered"
      }
    }
  }
}

You can use a bucket_selector aggregation to keep only the buckets where both a "trial" event and a "registration" event exist. Then use a stats_bucket aggregation to get the bucket count.
Query
{
  "size": 0,
  "aggs": {
    "visitors": {
      "terms": {
        "field": "origin.visitor_id.keyword",
        "size": 10
      },
      "aggs": {
        "user_started_out_free": {
          "filter": {
            "term": {
              "event_key.keyword": "basic_search"
            }
          }
        },
        "then_registered": {
          "filter": {
            "term": {
              "event_key.keyword": "registration"
            }
          }
        },
        "user_first_free_then_registered": {
          "bucket_selector": {
            "buckets_path": {
              "free": "user_started_out_free._count",
              "registered": "then_registered._count"
            },
            "script": "params.free > 0 && params.registered > 0"
          }
        }
      }
    },
    "bucketcount": {
      "stats_bucket": {
        "buckets_path": "visitors._count"
      }
    }
  }
}
Result
"visitors" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "3",
"doc_count" : 4,
"then_registered" : {
"doc_count" : 3
},
"user_started_out_free" : {
"doc_count" : 1
}
},
{
"key" : "1",
"doc_count" : 3,
"then_registered" : {
"doc_count" : 1
},
"user_started_out_free" : {
"doc_count" : 1
}
},
{
"key" : "2",
"doc_count" : 2,
"then_registered" : {
"doc_count" : 1
},
"user_started_out_free" : {
"doc_count" : 1
}
}
]
},
"bucketcount" : {
"count" : 3,
"min" : 2.0,
"max" : 4.0,
"avg" : 3.0,
"sum" : 9.0
}

Related

bucket aggregation/bucket_script computation

How can I apply computation using bucket fields via bucket_script? Moreover, I would like to understand how to aggregate on distinct results.
For example, below is a sample query, and the response.
What I am looking for is to aggregate the following into two fields:
the sum of all buckets' dist.value from the example response (1+2=3)
the sum of all buckets' (dist.value x key) from the example response, (1x10)+(2x20)=50
Query
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "field": "value"
          }
        }
      ]
    }
  },
  "aggs": {
    "sales_summary": {
      "terms": {
        "field": "qty",
        "size": 100
      },
      "aggs": {
        "dist": {
          "cardinality": {
            "field": "somekey.keyword"
          }
        }
      }
    }
  }
}
Query Result:
{
"aggregations": {
"sales_summary": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": 10,
"doc_count": 100,
"dist": {
"value": 1
}
},
{
"key": 20,
"doc_count": 200,
"dist": {
"value": 2
}
}
]
}
}
}
You need to use a sum_bucket aggregation, which is a pipeline aggregation, to sum the cardinality aggregation's results across all the buckets.
Search query for the sum of all buckets' dist.value from the example response (1+2=3):
POST idxtest1/_search
{
  "size": 0,
  "aggs": {
    "sales_summary": {
      "terms": {
        "field": "qty",
        "size": 100
      },
      "aggs": {
        "dist": {
          "cardinality": {
            "field": "pageview"
          }
        }
      }
    },
    "sum_buckets": {
      "sum_bucket": {
        "buckets_path": "sales_summary>dist"
      }
    }
  }
}
Search Response :
"aggregations" : {
"sales_summary" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : 10,
"doc_count" : 3,
"dist" : {
"value" : 2
}
},
{
"key" : 20,
"doc_count" : 3,
"dist" : {
"value" : 3
}
}
]
},
"sum_buckets" : {
"value" : 5.0
}
}
For the second requirement, you first need to modify the value in the bucket aggregation response using a bucket_script aggregation, and then run the sum_bucket aggregation over the modified values.
Search query for the sum of all buckets' (dist.value x key) from the example response, (1x10)+(2x20)=50:
POST idxtest1/_search
{
  "size": 0,
  "aggs": {
    "sales_summary": {
      "terms": {
        "field": "qty",
        "size": 100
      },
      "aggs": {
        "dist": {
          "cardinality": {
            "field": "pageview"
          }
        },
        "format-value-agg": {
          "bucket_script": {
            "buckets_path": {
              "newValue": "dist"
            },
            "script": "params.newValue * 10"
          }
        }
      }
    },
    "sum_buckets": {
      "sum_bucket": {
        "buckets_path": "sales_summary>format-value-agg"
      }
    }
  }
}
Search Response :
"aggregations" : {
"sales_summary" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : 10,
"doc_count" : 3,
"dist" : {
"value" : 2
},
"format-value-agg" : {
"value" : 20.0
}
},
{
"key" : 20,
"doc_count" : 3,
"dist" : {
"value" : 3
},
"format-value-agg" : {
"value" : 30.0
}
}
]
},
"sum_buckets" : {
"value" : 50.0
}
}
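As a sanity check, here is the same arithmetic done by hand in a Python sketch, using the hypothetical bucket values from the response above:

```python
# Bucket values as returned by the terms + cardinality aggregation above.
buckets = [
    {"key": 10, "dist": 2},
    {"key": 20, "dist": 3},
]

# Requirement 1: the plain sum_bucket over dist.
sum_dist = sum(b["dist"] for b in buckets)

# Requirement 2: dist multiplied by the bucket's own key, then summed.
# Note that the bucket_script above multiplies by a literal 10, which
# matches this product only for the key=10 bucket (hence the response
# shows 50.0 rather than 80).
sum_dist_times_key = sum(b["dist"] * b["key"] for b in buckets)

print(sum_dist, sum_dist_times_key)  # 5 80
```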

Count number of inner elements of array property (Including repeated values)

Given I have the following records.
[
  {
    "profile": "123",
    "inner": [
      {
        "name": "John"
      }
    ]
  },
  {
    "profile": "456",
    "inner": [
      {
        "name": "John"
      },
      {
        "name": "John"
      },
      {
        "name": "James"
      }
    ]
  }
]
I want to get something like:
"aggregations": {
"name": {
"buckets": [
{
"key": "John",
"doc_count": 3
},
{
"key": "James",
"doc_count": 1
}
]
}
}
I'm a beginner with Elasticsearch, and this seems like a pretty simple operation, but I can't find out how to achieve it.
If I try a simple aggs using terms, it returns 2 for John instead of 3.
Example request I'm trying:
{
  "size": 0,
  "aggs": {
    "name": {
      "terms": {
        "field": "inner.name"
      }
    }
  }
}
How can I possibly achieve this?
Additional Info: It will be used on Kibana later.
I can change mapping to whatever I want, but AFAIK Kibana doesn't like the "Nested" type. :(
You need to use a value_count aggregation. By default, terms only gives you a doc_count, but the value_count aggregation counts the number of times a given field occurs.
So, for your purposes:
{
  "size": 0,
  "aggs": {
    "name": {
      "terms": {
        "field": "inner.name"
      },
      "aggs": {
        "total": {
          "value_count": {
            "field": "inner.name"
          }
        }
      }
    }
  }
}
Which returns:
"aggregations" : {
"name" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "John",
"doc_count" : 2,
"total" : {
"value" : 3
}
},
{
"key" : "James",
"doc_count" : 1,
"total" : {
"value" : 2
}
}
]
}
}
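The difference between doc_count and the value_count result is easy to see client-side. A sketch using the two example documents from the question:

```python
from collections import Counter

docs = [
    {"profile": "123", "inner": [{"name": "John"}]},
    {"profile": "456", "inner": [{"name": "John"}, {"name": "John"},
                                 {"name": "James"}]},
]

# doc_count: how many documents contain the name at least once.
doc_count = Counter()
for d in docs:
    for name in {i["name"] for i in d["inner"]}:  # de-dupe within a doc
        doc_count[name] += 1

# value_count: total occurrences, repeats within a document included.
value_count = Counter(i["name"] for d in docs for i in d["inner"])

print(doc_count["John"], value_count["John"])  # 2 3
```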

Sub-aggregate a multi-level nested composite aggregation

I'm trying to set up a search query that should composite-aggregate a collection by a multi-level nested field and give me some sub-aggregation metrics from this collection. I was able to fetch the composite aggregation with its buckets as expected, but the sub-aggregation metrics come back as 0 for all buckets. I'm not sure if I am failing to correctly point out which fields the sub-aggregation should consider, or if it should be placed in a different part of the query.
My collection looks similar to the following:
{
  id: '32ead132eq13w21',
  statistics: {
    clicks: 123,
    views: 456
  },
  categories: [{ // nested type
    name: 'color',
    tags: [{ // nested type
      slug: 'blue'
    }, {
      slug: 'red'
    }]
  }]
}
Below you can find what I have tried so far. All buckets come back with a clicks sum of 0, even though every document has a set clicks value.
GET /acounts-123321/_search
{
  "size": 0,
  "aggs": {
    "nested_categories": {
      "nested": {
        "path": "categories"
      },
      "aggs": {
        "nested_tags": {
          "nested": {
            "path": "categories.tags"
          },
          "aggs": {
            "group": {
              "composite": {
                "size": 100,
                "sources": [
                  { "slug": { "terms": { "field": "categories.tags.slug" } } }
                ]
              },
              "aggregations": {
                "clicks": {
                  "sum": {
                    "field": "statistics.clicks"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
The response body I have so far:
{
"took" : 6,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1304,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"nested_categories" : {
"doc_count" : 1486,
"nested_tags" : {
"doc_count" : 1486,
"group" : {
"buckets" : [
{
"key" : {
"slug" : "red"
},
"doc_count" : 268,
"clicks" : {
"value" : 0.0
}
}, {
"key" : {
"slug" : "blue"
},
"doc_count" : 122,
"clicks" : {
"value" : 0.0
},
.....
]
}
}
}
}
}
In order for this to work, all sources in the composite aggregation would need to be under the same nested context.
I've answered something similar a while ago. That asker needed to put the nested values onto the top level. You have the opposite challenge -- given that the statistics.clicks field is on the top level, you'd need to duplicate it across each entry of categories.tags which, I suspect, won't be feasible because you're likely updating these stats every now and then…
If you're OK with skipping the composite approach and using the terms agg without it, you can make the summation work by jumping back to the top level through reverse_nested:
{
  "size": 0,
  "aggs": {
    "nested_tags": {
      "nested": {
        "path": "categories.tags"
      },
      "aggs": {
        "by_slug": {
          "terms": {
            "field": "categories.tags.slug",
            "size": 100
          },
          "aggs": {
            "back_to_parent": {
              "reverse_nested": {},
              "aggs": {
                "clicks": {
                  "sum": {
                    "field": "statistics.clicks"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
This will work just as well but won't offer pagination.
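Conceptually, reverse_nested walks from each matching nested tag back to its parent document, and the sum then adds that parent's statistics.clicks. A rough client-side model with hypothetical documents:

```python
from collections import defaultdict

# Hypothetical parent documents: top-level clicks plus nested tag slugs.
docs = [
    {"clicks": 100, "tag_slugs": ["blue", "red"]},
    {"clicks": 40,  "tag_slugs": ["red"]},
]

# terms on categories.tags.slug, then reverse_nested + sum(statistics.clicks):
# each slug bucket joins back to the parent documents containing that slug.
# A given parent is counted at most once per bucket, which is what the
# per-document set() de-duplication models here.
clicks_by_slug = defaultdict(int)
for doc in docs:
    for slug in set(doc["tag_slugs"]):
        clicks_by_slug[slug] += doc["clicks"]

print(dict(clicks_by_slug))  # red -> 140, blue -> 100
```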
Clarification
If you needed a color filter, you could do:
{
  "size": 0,
  "aggs": {
    "categories_parent": {
      "nested": {
        "path": "categories"
      },
      "aggs": {
        "filtered_by_color": {
          "filter": {
            "term": {
              "categories.name": "color"
            }
          },
          "aggs": {
            "nested_tags": {
              "nested": {
                "path": "categories.tags"
              },
              "aggs": {
                "by_slug": {
                  "terms": {
                    "field": "categories.tags.slug",
                    "size": 100
                  },
                  "aggs": {
                    "back_to_parent": {
                      "reverse_nested": {},
                      "aggs": {
                        "clicks": {
                          "sum": {
                            "field": "statistics.clicks"
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

How to count number of fields inside nested field? - Elasticsearch

I did the following mapping. I would like to count the number of products in each nested field "products" (for each document separately). I would also like to do a histogram aggregation, so that I would know the number of specific bucket sizes.
PUT /receipts
{
  "mappings": {
    "properties": {
      "id": {
        "type": "integer"
      },
      "user_id": {
        "type": "integer"
      },
      "date": {
        "type": "date"
      },
      "sum": {
        "type": "double"
      },
      "products": {
        "type": "nested",
        "properties": {
          "name": {
            "type": "text"
          },
          "number": {
            "type": "double"
          },
          "price_single": {
            "type": "double"
          },
          "price_total": {
            "type": "double"
          }
        }
      }
    }
  }
}
I've tried this query, but I get the total number of products in the index instead of the number of products for each document separately.
GET /receipts/_search
{
  "query": {
    "match_all": {}
  },
  "size": 0,
  "aggs": {
    "terms": {
      "nested": {
        "path": "products"
      },
      "aggs": {
        "bucket_size": {
          "value_count": {
            "field": "products"
          }
        }
      }
    }
  }
}
Result of the query:
"aggregations" : {
"terms" : {
"doc_count" : 6552,
"bucket_size" : {
"value" : 0
}
}
}
UPDATE
Now I have this code where I make separate buckets for each id and count the number of products inside them.
GET /receipts/_search
{
  "query": {
    "match_all": {}
  },
  "size": 0,
  "aggs": {
    "terms": {
      "terms": {
        "field": "_id"
      },
      "aggs": {
        "nested": {
          "nested": {
            "path": "products"
          },
          "aggs": {
            "bucket_size": {
              "value_count": {
                "field": "products.number"
              }
            }
          }
        }
      }
    }
  }
}
Result of the query:
"aggregations" : {
"terms" : {
"doc_count_error_upper_bound" : 5,
"sum_other_doc_count" : 490,
"buckets" : [
{
"key" : "1",
"doc_count" : 1,
"nested" : {
"doc_count" : 21,
"bucket_size" : {
"value" : 21
}
}
},
{
"key" : "10",
"doc_count" : 1,
"nested" : {
"doc_count" : 5,
"bucket_size" : {
"value" : 5
}
}
},
{
"key" : "100",
"doc_count" : 1,
"nested" : {
"doc_count" : 12,
"bucket_size" : {
"value" : 12
}
}
},
...
Is is possible to group these values (21, 5, 12, ...) into buckets to make a histogram of them?
products is only the path to the array of individual products, not an aggregatable field itself. So you'll need to aggregate on one of your products' fields -- such as the number:
GET receipts/_search
{
  "size": 0,
  "aggs": {
    "terms": {
      "nested": {
        "path": "products"
      },
      "aggs": {
        "bucket_size": {
          "value_count": {
            "field": "products.number"
          }
        }
      }
    }
  }
}
Note that if a product has no number, it will not contribute to the total count. It's therefore best practice to always include an ID in each product and aggregate on that field instead.
Alternatively, you could use a script to account for missing values. Luckily, value_count does not deduplicate -- meaning that if two products are alike and/or have empty values, they'll still be counted as two:
GET receipts/_search
{
  "size": 0,
  "aggs": {
    "terms": {
      "nested": {
        "path": "products"
      },
      "aggs": {
        "bucket_size": {
          "value_count": {
            "script": {
              "source": "doc['products.number'].toString()"
            }
          }
        }
      }
    }
  }
}
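The difference between the field variant and the script variant comes down to whether products missing the number field are counted. A small client-side sketch with a hypothetical receipt:

```python
# Hypothetical receipt: one of its products is missing the number field.
receipts = [
    {"id": 1, "products": [{"number": 2.0}, {}, {"number": 5.0}]},
]
products = [p for r in receipts for p in r["products"]]

# value_count on products.number: only products carrying the field count.
with_number = sum(1 for p in products if "number" in p)

# The script variant: every nested product yields a value, even an empty one.
all_products = len(products)

print(with_number, all_products)  # 2 3
```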
UPDATE
You could also use a nested composite aggregation, which will give you the histogrammed product count with the corresponding receipt ID:
GET /receipts/_search
{
  "size": 0,
  "aggs": {
    "my_aggs": {
      "nested": {
        "path": "products"
      },
      "aggs": {
        "composite_parent": {
          "composite": {
            "sources": [
              {
                "receipt_id": {
                  "terms": {
                    "field": "_id"
                  }
                }
              },
              {
                "product_number": {
                  "histogram": {
                    "field": "products.number",
                    "interval": 1
                  }
                }
              }
            ]
          }
        }
      }
    }
  }
}
The interval is modifiable.

Return top N buckets only

So in Elasticsearch I can do something like this:
{
  "aggs": {
    "title": {
      "terms": {
        "field": "title",
        "shard_size": 50,
        "size": 5
      }
    }
  },
  "query": {...},
  "size": 0
}
And this will return me the document counts of the top 5 titles, so we end up with something like (in part):
"buckets" : [
{
"key" : "Delivery Driver",
"doc_count" : 1495
},
{
"key" : "Assistant Manager",
"doc_count" : 1250
},
{
"key" : "Server",
"doc_count" : 1175
},
{
"key" : "Dishwasher",
"doc_count" : 966
},
{
"key" : "Team Member",
"doc_count" : 960
}
]
But now I need to have the document counts in some custom buckets, so I do something like this:
{
  "aggs": {
    "loc": {
      "filters": {
        "filters": {
          "1042_2": {
            "terms": { "counties": [ ... ] }
          },
          "1594_2": {
            "terms": { "counties": [ ... ] }
          },
          "1714_2": {
            "terms": { "counties": [ ... ] }
          },
          "1746_2": {
            "terms": { "counties": [ ... ] }
          },
          "1814_2": {
            "terms": { "counties": [ ... ] }
          },
          "1943_2": {
            "terms": { "counties": [ ... ] }
          },
          "2658_2": {
            "terms": { "counties": [ ... ] }
          }
        }
      }
    }
  },
  "query": {...},
  "size": 0
}
Note that there are 7 buckets, because we don't know in advance which ones are the largest. Running this returns:
"buckets" : {
"1042_2" : {
"doc_count" : 23687
},
"1594_2" : {
"doc_count" : 8951
},
"1714_2" : {
"doc_count" : 52555
},
"1746_2" : {
"doc_count" : 60534
},
"1814_2" : {
"doc_count" : 63956
},
"1943_2" : {
"doc_count" : 25533
},
"2658_2" : {
"doc_count" : 534
}
}
But I would like it to only return me the largest 5 instead of all the buckets. Is there a way to restrict it to only the n largest buckets in the same way that the size parameter under terms did?
The size parameter does not make sense for the filters aggregation, because by specifying the filters you already explicitly control how many buckets get created and returned.
What you may want to consider instead is letting all the buckets be created, then sorting them by descending doc_count. The filters aggregation has no order option of its own, so sort either client-side or, if your version supports it, with a bucket_sort pipeline sub-aggregation on _count.
On the client side you then simply "consume" the first n buckets.
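Taking the top n from the filters response client-side is a one-liner sort. A sketch assuming the bucket shape shown in the response above:

```python
# The filters response buckets from above.
buckets = {
    "1042_2": {"doc_count": 23687},
    "1594_2": {"doc_count": 8951},
    "1714_2": {"doc_count": 52555},
    "1746_2": {"doc_count": 60534},
    "1814_2": {"doc_count": 63956},
    "1943_2": {"doc_count": 25533},
    "2658_2": {"doc_count": 534},
}

# Sort by doc_count descending and keep only the first five buckets.
top5 = sorted(buckets.items(), key=lambda kv: kv[1]["doc_count"],
              reverse=True)[:5]
print([key for key, _ in top5])
# ['1814_2', '1746_2', '1714_2', '1943_2', '1042_2']
```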
