Fetch the details of events that occurred exactly x times in a desired duration - elasticsearch

In Elasticsearch, I need to fetch records only if the event name occurred exactly x times in n days, or in some other particular duration.
Sample index data is as below:
{"event":{"name":"event1"},"timestamp":"2010-06-20"}
I'm able to get records where the desired event name occurs a minimum number of times in a particular duration, but instead of a minimum I want an exact count. Here's what I tried:
{
  "_source": true,
  "size": 0,
  "query": {
    "bool": {
      "filter": {
        "range": { "timestamp": { "gte": "2010", "lte": "2016" } }
      },
      "must": [
        { "match": { "event.name.keyword": "event1" } }
      ]
    }
  },
  "aggs": {
    "occurrence": {
      "terms": {
        "field": "event.name.keyword",
        "min_doc_count": 5,
        "size": 10
      }
    }
  }
}
Another way to achieve the same thing is value_count, but here as well I'm unable to add a condition that matches exact occurrences.
{
  "_source": true,
  "size": 0,
  "query": {
    "bool": {
      "filter": {
        "range": { "timestamp": { "gte": "2010", "lte": "2016" } }
      },
      "must": [
        { "match": { "event.name.keyword": "event1" } }
      ]
    }
  },
  "aggs": {
    "occurrence": {
      "value_count": {
        "field": "event.name.keyword"
      }
    }
  }
}
It produces this output (the rest is removed for brevity):
"aggregations" : {
"occurrence" : {
"value" : 2
}
}
But I need to add a condition on the aggregation output (occurrence here) so that I get records only if the event occurred exactly x times.
Can some ES experts help me with this?

You can use a Bucket Selector Aggregation and add a condition on the count, as shown below. The query below will return only events that occur exactly 5 times in total. You can add a query clause for whatever filter you want to apply, such as a date range, an event name, or anything else.
{
  "size": 0,
  "aggs": {
    "count": {
      "terms": {
        "field": "event.name.keyword",
        "size": 10
      },
      "aggs": {
        "val_count": {
          "value_count": {
            "field": "event.name.keyword"
          }
        },
        "selector": {
          "bucket_selector": {
            "buckets_path": {
              "my_var1": "val_count"
            },
            "script": "params.my_var1 == 5"
          }
        }
      }
    }
  }
}
You will get a result like this:
"aggregations" : {
"count" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "event1",
"doc_count" : 5,
"val_count" : {
"value" : 5
}
},
{
"key" : "event8",
"doc_count" : 5,
"val_count" : {
"value" : 5
}
}
]
}
}
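To tie this back to the question, a sketch (untested) that combines the original date-range filter and event-name match with the bucket selector could look like the following. The exactly_five name is illustrative, and the built-in _count path stands in for the separate value_count sub-aggregation, which should be equivalent here:
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        { "range": { "timestamp": { "gte": "2010", "lte": "2016" } } },
        { "match": { "event.name.keyword": "event1" } }
      ]
    }
  },
  "aggs": {
    "occurrence": {
      "terms": {
        "field": "event.name.keyword",
        "size": 10
      },
      "aggs": {
        "exactly_five": {
          "bucket_selector": {
            "buckets_path": { "occurrences": "_count" },
            "script": "params.occurrences == 5"
          }
        }
      }
    }
  }
}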

Related

Finding intersection of two buckets using Elastic

I have data structured as follows in an Elastic index:
[
  { customer_id: 1, date_of_purchase: 01-01-2022 },
  { customer_id: 2, date_of_purchase: 01-02-2022 },
  { customer_id: 1, date_of_purchase: 01-02-2022 },
  ....
]
I want to find the number of users who have bought something in both September and October, but I'm having issues figuring out how to write a query for this. Any suggestions would rock, thanks!
I have used the following aggregations:
1. Terms aggregation
2. Bucket selector
3. Date Range
In the query I filter to documents whose purchase date falls in either January or February; this reduces the number of documents the aggregation has to work on. In the aggregation I group by customer_id (terms aggregation), then further group each customer's documents into date ranges (one bucket per month). I then eliminate months with zero documents (using a bucket selector), i.e. months with no purchase, and finally eliminate customers that are left with one or zero buckets. (I used January and February here rather than September and October, but the pattern is the same.)
Query
{
  "query": {
    "bool": {
      "should": [
        {
          "range": {
            "date_of_purchase": {
              "gte": "2022-01-01",
              "lte": "2022-01-31"
            }
          }
        },
        {
          "range": {
            "date_of_purchase": {
              "gte": "2022-02-01",
              "lte": "2022-02-28"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "customers": {
      "terms": {
        "field": "customer_id",
        "size": 10
      },
      "aggs": {
        "range": {
          "date_range": {
            "field": "date_of_purchase",
            "ranges": [
              {
                "from": "2022-01-01",
                "to": "2022-01-31"
              },
              {
                "from": "2022-02-01",
                "to": "2022-02-28"
              }
            ]
          },
          "aggs": {
            "filter_months": {
              "bucket_selector": {
                "buckets_path": {
                  "doc_count": "_count"
                },
                "script": "params.doc_count >= 1"
              }
            }
          }
        },
        "bucket_count": {
          "bucket_selector": {
            "buckets_path": {
              "bucket_count": "range._bucket_count"
            },
            "script": "params.bucket_count > 1"
          }
        }
      }
    }
  }
}
Results
"aggregations" : {
"cutomers" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : 1,
"doc_count" : 2,
"range" : {
"buckets" : [
{
"key" : "2022-01-01T00:00:00.000Z-2022-01-31T00:00:00.000Z",
"from" : 1.6409952E12,
"from_as_string" : "2022-01-01T00:00:00.000Z",
"to" : 1.6435872E12,
"to_as_string" : "2022-01-31T00:00:00.000Z",
"doc_count" : 1
},
{
"key" : "2022-02-01T00:00:00.000Z-2022-02-28T00:00:00.000Z",
"from" : 1.6436736E12,
"from_as_string" : "2022-02-01T00:00:00.000Z",
"to" : 1.6460064E12,
"to_as_string" : "2022-02-28T00:00:00.000Z",
"doc_count" : 1
}
]
}
}
]
}
}
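An alternative sketch (untested) that avoids the date_range sub-aggregation: give each customer one filter sub-aggregation per month and keep only the buckets where both counts are non-zero. The aggregation names jan, feb, and both_months are illustrative:
"aggs": {
  "customers": {
    "terms": { "field": "customer_id", "size": 10000 },
    "aggs": {
      "jan": {
        "filter": {
          "range": { "date_of_purchase": { "gte": "2022-01-01", "lte": "2022-01-31" } }
        }
      },
      "feb": {
        "filter": {
          "range": { "date_of_purchase": { "gte": "2022-02-01", "lte": "2022-02-28" } }
        }
      },
      "both_months": {
        "bucket_selector": {
          "buckets_path": {
            "jan": "jan>_count",
            "feb": "feb>_count"
          },
          "script": "params.jan > 0 && params.feb > 0"
        }
      }
    }
  }
}
The surviving buckets are the customers who bought in both months; counting them still happens on the client, since the question ultimately asks for a number.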

Elastic script from buckets and higher level aggregation

I want to compare the daily average of a metric (the frequency of words appearing in texts) over a week to the value on a specific day. My goal is to check whether there's a spike: if the last day is way higher than the daily average, I'd trigger an alarm.
So from my input in Elasticsearch I compute the daily average during the week and find out the value for the last day of that week.
For getting the daily average for the week, I simply cut a week's worth of data using a range query on the date field, so all my available data is for the given week. I compute the sum and divide by 7 for a daily average.
For getting the last day's value, I did a terms aggregation on the date field with descending order and size 1, as suggested in a different question (How to select the last bucket in a date_histogram selector in Elasticsearch).
The whole output is as follows. Here you can see words "rama0" and "rama1" with their corresponding frequencies.
{
  "aggregations" : {
    "the_keywords" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "rama0",
          "doc_count" : 4200,
          "the_last_day" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 3600,
            "buckets" : [
              {
                "key" : 1580169600000,
                "key_as_string" : "2020-01-28T00:00:00.000Z",
                "doc_count" : 600,
                "the_last_day_frequency" : {
                  "value" : 3000.0
                }
              }
            ]
          },
          "the_weekly_sum" : {
            "value" : 21000.0
          },
          "the_daily_average" : {
            "value" : 3000.0
          }
        },
        {
          "key" : "rama1",
          "doc_count" : 4200,
          "the_last_day" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 3600,
            "buckets" : [
              {
                "key" : 1580169600000,
                "key_as_string" : "2020-01-28T00:00:00.000Z",
                "doc_count" : 600,
                "the_last_day_frequency" : {
                  "value" : 3000.0
                }
              }
            ]
          },
          "the_weekly_sum" : {
            "value" : 21000.0
          },
          "the_daily_average" : {
            "value" : 3000.0
          }
        },
        [...]
      ]
    }
  }
}
Now I have the_daily_average at the top level of the output, and the_last_day_frequency inside the single-element buckets list of the the_last_day aggregation. I cannot use a bucket_script to compare the two, because I cannot refer to a single bucket if I place the script outside the_last_day aggregation, and I cannot refer to higher-level aggregations if I place the script inside it.
IMO the reasonable thing to do would be to put the script outside the aggregation and use a buckets_path with the <AGG_NAME><MULTIBUCKET_KEY> syntax mentioned in the docs. I have tried "var1": "the_last_day[1580169600000]>the_last_day_frequency" and variations (hardcoding the key first, until it works), but I haven't been able to refer to a particular bucket.
My ultimate goal is to have a list of keywords for which the last day frequency greatly exceeds the daily average.
For anyone interested, my current query is as follows. Notice that the part I'm struggling with is commented out.
body='{
  "query": {
    "range": {
      "date": {
        "gte": "START",
        "lte": "END"
      }
    }
  },
  "aggs": {
    "the_keywords": {
      "terms": {
        "field": "keyword",
        "size": 100
      },
      "aggs": {
        "the_weekly_sum": {
          "sum": {
            "field": "frequency"
          }
        },
        "the_daily_average": {
          "bucket_script": {
            "buckets_path": {
              "weekly_sum": "the_weekly_sum"
            },
            "script": {
              "inline": "return params.weekly_sum / 7"
            }
          }
        },
        "the_last_day": {
          "terms": {
            "field": "date",
            "size": 1,
            "order": { "_key": "desc" }
          },
          "aggs": {
            "the_last_day_frequency": {
              "sum": {
                "field": "frequency"
              }
            }
          }
        }/*,
        "the_spike": {
          "bucket_script": {
            "buckets_path": {
              "last_day_frequency": "the_last_day>the_last_day_frequency",
              "daily_average": "the_daily_average"
            },
            "script": {
              "inline": "return last_day_frequency / daily_average"
            }
          }
        }*/
      }
    }
  }
}'
In your query, the_last_day>the_last_day_frequency points to a bucket, not a single value, so it throws an error. You need to get a single metric value from "the_last_day_frequency", which you can achieve using max_bucket. Then you can use a bucket_selector aggregation to compare the last day's value with the average value.
Query:
"aggs": {
"the_keywords": {
"terms": {
"field": "keyword",
"size": 100
},
"aggs": {
"the_weekly_sum": {
"sum": {
"field": "frequency"
}
},
"the_daily_average": {
"bucket_script": {
"buckets_path": {
"weekly_sum": "the_weekly_sum"
},
"script": {
"inline": "return params.weekly_sum / 7"
}
}
},
"the_last_day": {
"terms": {
"field": "date",
"size": 1,
"order": {
"_key": "desc"
}
},
"aggs": {
"the_last_day_frequency": {
"sum": {
"field": "frequency"
}
}
}
},
"max_frequency_last_day": {
"max_bucket": {
"buckets_path": "the_last_day>the_last_day_frequency"
}
},
"the_spike": {
"bucket_selector": {
"buckets_path": {
"last_day_frequency": "max_frequency_last_day",
"daily_average": "the_daily_average"
},
"script": {
"inline": "params.last_day_frequency > params.daily_average"
}
}
}
}
}
}
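If you want the magnitude of the spike rather than a yes/no filter, a hedged variant is to add a bucket_script alongside the selector (the the_spike_ratio name is illustrative):
"the_spike_ratio": {
  "bucket_script": {
    "buckets_path": {
      "last_day_frequency": "max_frequency_last_day",
      "daily_average": "the_daily_average"
    },
    "script": "params.last_day_frequency / params.daily_average"
  }
}
The selector script could then be tightened to something like params.last_day_frequency > 2 * params.daily_average, so only keywords whose last day at least doubles the daily average are kept.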

Aggregation performance issue in Elasticsearch with hotel availability data

I am building a small app to find hotel room availability like booking.com using Elasticsearch 6.8.0.
Basically, I have a document per day and room that specifies whether the room is available and its rate for that day. I need to run a query with these requirements:
Input:
The days of the desired staying.
The max amount of money I am willing to spend.
The page of the results I want to see.
The number of results per page.
Output:
List of cheapest offer per hotel that fulfill the requirements, ordered in ASC order.
Document schema:
{
  "mappings": {
    "_doc": {
      "properties": {
        "room_id": {
          "type": "keyword"
        },
        "available": {
          "type": "boolean"
        },
        "rate": {
          "type": "float"
        },
        "hotel_id": {
          "type": "keyword"
        },
        "day": {
          "type": "date",
          "format": "yyyyMMdd"
        }
      }
    }
  }
}
I have an index per month, and at the moment I only search within the same month.
I came up with this query:
GET /hotels_201910/_search?filter_path=aggregations.hotel.buckets.min_price.value,aggregations.hotel.buckets.key
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "day": { "gte": "20191001", "lte": "20191010" }
          }
        },
        {
          "term": { "available": true }
        }
      ]
    }
  },
  "aggs": {
    "hotel": {
      "terms": {
        "field": "hotel_id",
        "min_doc_count": 1,
        "size": 1000000
      },
      "aggs": {
        "room": {
          "terms": {
            "field": "room_id",
            "min_doc_count": 10,
            "size": 1000000
          },
          "aggs": {
            "sum_price": {
              "sum": {
                "field": "rate"
              }
            },
            "max_price": {
              "bucket_selector": {
                "buckets_path": {
                  "price": "sum_price"
                },
                "script": "params.price <= 600"
              }
            }
          }
        },
        "min_price": {
          "min_bucket": {
            "buckets_path": "room>sum_price"
          }
        },
        "sort_by_min_price": {
          "bucket_sort": {
            "sort": [{ "min_price": { "order": "asc" } }],
            "from": 0,
            "size": 20
          }
        }
      }
    }
  }
}
It works, but it has several issues:
It is too slow. With 100K daily rooms it takes about 500 ms to return on my computer, with no other query running, so in a live system it would be very bad.
I need to set "size" to a big number in the terms aggregation, otherwise not all hotels and rooms are considered.
Is there a way to improve the performance of this aggregation? I have tried splitting the index into multiple shards, but it did not help.
I am almost sure that the approach is wrong, and that is why it is slow. Any recommendations on how to achieve a faster query response time in this case?
Before getting to the answer: I didn't understand why you are using the condition/aggregation below.
"min_price": {
"min_bucket": {
"buckets_path": "room>sum_price"
}
}
Can you give me more clarification on why you need this?
Now, to answer your main question:
Why do you want to aggregate on room_id as well as hotel_id? You can get all the rooms from your search and then group them by hotel_id on the application side.
The logic below will get you all docs grouped by room_id, with sum metrics. You can use the same script filter for the price <= 600 condition.
{
  "size": 0,
  "query": {
    "bool": {
      "filter": [
        {
          "range": {
            "day": { "gte": "20191001", "lte": "20191010" }
          }
        },
        {
          "term": { "available": true }
        }
      ]
    }
  },
  "aggs": {
    "by_room_id": {
      "composite": {
        "size": 100,
        "sources": [
          {
            "room_id": {
              "terms": {
                "field": "room_id"
              }
            }
          }
        ]
      },
      "aggregations": {
        "price_on_required_dates": {
          "sum": { "field": "rate" }
        },
        "include_source": {
          "top_hits": {
            "size": 1,
            "_source": true
          }
        },
        "price_bucket_sort": {
          "bucket_sort": {
            "sort": [
              { "price_on_required_dates": { "order": "desc" } }
            ]
          }
        }
      }
    }
  }
}
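A nice property of composite (and one reason to prefer it over a huge terms size) is pagination: each response carries an after_key, which you pass back in an after clause to fetch the next page. A sketch, assuming the previous page ended at a hypothetical room_0815 (sub-aggregations unchanged from above):
"by_room_id": {
  "composite": {
    "size": 100,
    "after": { "room_id": "room_0815" },
    "sources": [
      {
        "room_id": {
          "terms": { "field": "room_id" }
        }
      }
    ]
  }
}
Note that composite buckets come back in the natural order of their sources, and depending on the Elasticsearch version, pipeline sub-aggregations such as the bucket_sort above may not be supported inside composite, so the cheapest-first ordering may need to happen on the application side.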
Also, to improve search performance, see:
https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-search-speed.html

Elasticsearch: find the difference in a field using a range query

I have to find out how many kWh were consumed between two given times. For now, I have two queries that find the last and the first record in the time range, using desc and asc sorting respectively, and I subtract the two to get the kWh value for the period. Is there another way to get the kWh without two queries?
Range query:
"query": {
"bool": {
"must": [
{
"range": {
"createdtime": {
"gte": "1566757800000",
"lte": "1566844199000",
"boost": 2.0
}
}
},
{
"match": {
"meter_id": 101
}
}
]
}
},
"size" : 1,
"from": 0,
"sort": { "createdtime" : {"order" : "desc"} }
}
The other query is almost the same, except the order is asc.
So both queries return one record each, and I do the subtraction on the two results to find the difference.
You could run just one query and use a top_hits aggregation to extract the "first" and "last" values, but it won't calculate the difference; you'd still have to do that outside Elasticsearch.
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "createdtime": {
              "gte": "1566757800000",
              "lte": "1566844199000",
              "boost": 2.0
            }
          }
        },
        {
          "match": {
            "meter_id": 101
          }
        }
      ]
    }
  },
  "aggs": {
    "range": {
      "filter": {
        "range": {
          "createddate": {
            "gte": "2016-08-19T10:00:00",
            "lte": "2016-08-23T10:00:00"
          }
        }
      },
      "aggs": {
        "min": {
          "top_hits": {
            "sort": [{ "createddate": { "order": "asc" } }],
            "_source": { "includes": [ "kwh_value" ] },
            "size": 1
          }
        },
        "max": {
          "top_hits": {
            "sort": [{ "createddate": { "order": "desc" } }],
            "_source": { "includes": [ "kwh_value" ] },
            "size": 1
          }
        }
      }
    }
  }
}
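The relevant part of the response would then look roughly like this (doc_count and the kwh_value numbers are made up for illustration); the difference, here 1350.0 - 1200.5, is computed client-side from the two hits:
"aggregations": {
  "range": {
    "doc_count": 42,
    "min": {
      "hits": {
        "hits": [
          { "_source": { "kwh_value": 1200.5 } }
        ]
      }
    },
    "max": {
      "hits": {
        "hits": [
          { "_source": { "kwh_value": 1350.0 } }
        ]
      }
    }
  }
}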

Elasticsearch: aggregation to select only docs having the max value of a field

I am using Elasticsearch 6.5.
Basically, my query can return multiple documents from the index, but I need only those documents that have the max value for a particular field.
E.g.
{
  "query": {
    "bool": {
      "must": [
        {
          "match": { "header.date" : "2019-07-02" }
        },
        {
          "match": { "header.field" : "ABC" }
        },
        {
          "bool": {
            "should": [
              {
                "regexp": { "body.meta.field": "myregex1" }
              },
              {
                "regexp": { "body.meta.field": "myregex2" }
              }
            ]
          }
        }
      ]
    }
  },
  "size": 10000
}
The above query returns lots of documents/messages. The sample data returned looks like this:
"header" : {
"id" : "Text_20190702101200123_111",
"date" : "2019-07-02"
"field": "ABC"
},
"body" : {
"meta" : {
"field" : "myregex1",
"timestamp": "2019-07-02T10:12:00.123Z",
}
}
-----------------
"header" : {
"id" : "Text_20190702151200123_121",
"date" : "2019-07-02"
"field": "ABC"
},
"body" : {
"meta" : {
"field" : "myregex2",
"timestamp": "2019-07-02T15:12:00.123Z",
}
}
-----------------
"header" : {
"id" : "Text_20190702081200133_124",
"date" : "2019-07-02"
"field": "ABC"
},
"body" : {
"meta" : {
"field" : "myregex1",
"timestamp": "2019-07-02T08:12:00.133Z",
}
}
So based on the above 3 documents, I only want the one with the max timestamp to be shown, i.e. "timestamp": "2019-07-02T15:12:00.123Z".
I only want one document in the above example.
I tried doing it as below:
{
  "query": {
    "bool": {
      "must": [
        {
          "match": { "header.date" : "2019-07-02" }
        },
        {
          "match": { "header.field" : "ABC" }
        },
        {
          "bool": {
            "should": [
              {
                "regexp": { "body.meta.field": "myregex1" }
              },
              {
                "regexp": { "body.meta.field": "myregex2" }
              }
            ]
          }
        }
      ]
    }
  },
  "aggs": {
    "group": {
      "terms": {
        "field": "header.id",
        "order": { "group_docs" : "desc" }
      },
      "aggs" : {
        "group_docs": {
          "max" : { "field": "body.meta.timestamp" }
        }
      }
    }
  },
  "size": "10000"
}
Executing the above, I am still getting all 3 documents instead of only one.
I do get the buckets, but I need only one of them, not all of them.
The output, in addition to all the records:
"aggregations": {
"group": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Text_20190702151200123_121",
"doc_count": 29,
"group_docs": {
"value": 1564551683867,
"value_as_string": "2019-07-02T15:12:00.123Z"
}
},
{
"key": "Text_20190702101200123_111",
"doc_count": 29,
"group_docs": {
"value": 1564551633912,
"value_as_string": "2019-07-02T10:12:00.123Z"
}
},
{
"key": "Text_20190702081200133_124",
"doc_count": 29,
"group_docs": {
"value": 1564510566971,
"value_as_string": "2019-07-02T08:12:00.133Z"
}
}
]
}
}
What am I missing here?
Please note that I can have more than one message for the same timestamp, so I want them all, i.e. all the messages/documents belonging to the max timestamp.
In the above example there are 29 messages per timestamp (it can be any number), so 29 * 3 messages are retrieved by my query even with the above aggregation.
Basically I am able to group correctly; I am looking for something like HAVING in SQL.
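One pattern that behaves like a HAVING on the max timestamp, sketched here and untested: since the goal is every document sharing the latest timestamp, a terms aggregation on the timestamp itself (ordered descending, size 1) with a top_hits sub-aggregation returns exactly the documents in the max-timestamp bucket. The docs name and the top_hits size of 100 are illustrative, and the match_all stands in for the bool query from the question:
{
  "size": 0,
  "query": { "match_all": {} },
  "aggs": {
    "latest_timestamp": {
      "terms": {
        "field": "body.meta.timestamp",
        "size": 1,
        "order": { "_key": "desc" }
      },
      "aggs": {
        "docs": {
          "top_hits": { "size": 100 }
        }
      }
    }
  }
}
With the real query in place, the single returned bucket should hold all the messages of the latest timestamp (the 29 in the example above).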
