I'm using Elastica's query builder to create queries for Elasticsearch (version 5.3). I have around 1600 documents indexed in a particular index and type. When I perform a search in that index with an empty string as the query, I only get around 440 hits.
The generated query is:
{
  "query": {
    "bool": {
      "should": [{
        "multi_match": {
          "query": "",
          "fields": ["<field_1>^5", "<field_2>^4", "<field_3>^1", "<field_4>^2"],
          "fuzziness": "AUTO"
        }
      }]
    }
  },
  "from": 0,
  "size": 20,
  "aggs": {
    "<agg_name_1>": {
      "terms": {
        "field": "<agg_field_1>"
      }
    },
    "<agg_name_2>": {
      "terms": {
        "field": "<agg_field_2>"
      }
    },
    "<agg_name_3>": {
      "terms": {
        "field": "<agg_field_3>"
      }
    },
    "<agg_name_4>": {
      "terms": {
        "field": "<agg_field_4>"
      }
    },
    "<agg_name_5>": {
      "terms": {
        "field": "<agg_field_5>"
      }
    },
    "<date_agg_name>": {
      "date_range": {
        "field": "<agg_field_date_1>",
        "keyed": true,
        "ranges": [{
          "from": "now/d",
          "key": "NOW/DAY TO *"
        }, {
          "from": "now-2d/d",
          "key": "NOW/DAY-2DAY TO *"
        }, {
          "from": "now-7d/d",
          "key": "NOW/DAY-7DAY TO *"
        }, {
          "from": "now-30d/d",
          "key": "NOW/DAY-30DAY TO *"
        }]
      }
    },
    "<agg_name_integer>": {
      "range": {
        "field": "<agg_field_integer>",
        "keyed": true,
        "ranges": [{
          "to": 1,
          "key": "1"
        }, {
          "to": 2,
          "key": "2"
        }, {
          "to": 3,
          "key": "3"
        }, {
          "to": 4,
          "key": "4"
        }, {
          "to": 5,
          "key": "5"
        }]
      }
    }
  }
}
I thought that since the query is an empty string it should match all the documents, so why is it only matching a subset of them? I also tried changing should to must, but there was no difference.
Is it because of multi_match? Or fuzziness? Or the fields?
P.S. The actual field names have been changed and replaced with placeholders.
If the query string is empty you can use the match_all query:
https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-all-query.html
As far as I have noticed, multi_match does not work with an empty query string.
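For example, when the search input is empty you could send something like this instead of the multi_match (a minimal sketch; from, size, and the aggregations from your query stay the same):
{
  "query": {
    "match_all": {}
  },
  "from": 0,
  "size": 20
}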
Related
I have a query like this:
https://pastebin.com/9YK6WxEJ
This gives me:
https://pastebin.com/ranpCnzG
Now, the buckets are fine but I want to get the documents' data grouped by bucket name, not just their count in doc_count. Is there any way to do that?
Maybe this works for you?
"aggs": {
  "rating_ranges": {
    "range": {
      "field": "AggregateRating",
      "keyed": true,
      "ranges": [
        {
          "key": "bad",
          "to": 3
        },
        {
          "key": "average",
          "from": 3,
          "to": 4
        },
        {
          "key": "good",
          "from": 4
        }
      ]
    },
    "aggs": {
      "hits": {
        "top_hits": {
          "size": 100,
          "sort": [
            {
              "AggregateRating": {
                "order": "desc"
              }
            }
          ]
        }
      }
    }
  }
}
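The top_hits sub-aggregation returns the actual documents that fall into each bucket (up to size of them, sorted as specified), so each range bucket will contain the matching documents themselves rather than just a doc_count.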
I have an aggregation query where I am trying to calculate the maximum standard deviation of the number of destination IPs per IP address for a certain time range. As is the common problem with the moving function std_dev aggregation, the first 2 days' std dev values will always be null and 0 respectively, because no prior data is taken into account.
Here is my aggregation query:
{
  "size": 0,
  "query": {
    "bool": {
      "must": [
        {
          "exists": {
            "field": "aggregations.range.buckets.by ip.buckets.by date.buckets.max_dest_ips.value"
          }
        }
      ]
    }
  },
  "aggs": {
    "range": {
      "date_range": {
        "field": "Source Time",
        "ranges": [
          {
            "from": "2018-04-25",
            "to": "2018-05-02"
          }
        ]
      },
      "aggs": {
        "by ip": {
          "terms": {
            "field": "IP Address.keyword",
            "size": 500
          },
          "aggs": {
            "datehisto": {
              "date_histogram": {
                "field": "Source Time",
                "interval": "day"
              },
              "aggs": {
                "max_dest_ips": {
                  "sum": {
                    "field": "aggregations.range.buckets.by ip.buckets.by date.buckets.max_dest_ips.value"
                  }
                },
                "max_dest_ips_std_dev": {
                  "moving_fn": {
                    "buckets_path": "max_dest_ips",
                    "window": 3,
                    "script": "MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))"
                  }
                }
              }
            }
          }
        }
      }
    }
  },
  "post_filter": {
    "range": {
      "Source Time": {
        "gte": "2018-05-01"
      }
    }
  }
}
Here is a snippet of the response:
{
  "key": "192.168.0.1",
  "doc_count": 6,
  "datehisto": {
    "buckets": [
      {
        "key_as_string": "2018-04-25T00:00:00.000Z",
        "key": 1524614400000,
        "doc_count": 1,
        "max_dest_ips": {
          "value": 309
        },
        "max_dest_ips_std_dev": {
          "value": null
        }
      },
      {
        "key_as_string": "2018-04-26T00:00:00.000Z",
        "key": 1524700800000,
        "doc_count": 1,
        "max_dest_ips": {
          "value": 529
        },
        "max_dest_ips_std_dev": {
          "value": 0
        }
      },
      {
        "key_as_string": "2018-04-27T00:00:00.000Z",
        "key": 1524787200000,
        "doc_count": 1,
        "max_dest_ips": {
          "value": 408
        },
        "max_dest_ips_std_dev": {
          "value": 110
        }
      },
      {
        "key_as_string": "2018-04-28T00:00:00.000Z",
        "key": 1524873600000,
        "doc_count": 1,
        "max_dest_ips": {
          "value": 187
        },
        "max_dest_ips_std_dev": {
          "value": 89.96419040682551
        }
      }
    ]
  }
}
What I want is for the first 2 days' bucket data (the 25th and 26th) to be filtered out and removed from the above bucket results. I have tried the post_filter above and the normal query filter below:
"filter": {
  "range": {
    "Source Time": {
      "gte": "2018-04-27"
    }
  }
}
The post_filter does nothing. The filter range query above makes the buckets start from the 27th, but it also makes the standard deviation calculations start on the 27th (resulting in the 27th being null and the 28th being 0), when I want them to start from the 25th instead.
Any other alternative solutions? Help is greatly appreciated!
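One possible alternative (a sketch, not a tested solution): keep the query range starting on the 25th so the moving window still sees the warm-up days, and drop the warm-up buckets themselves with a bucket_selector pipeline aggregation that compares each bucket's key (epoch millis) against a cutoff. This assumes your version supports the special _key buckets_path; the cutoff below is the epoch-millis key of the 2018-04-27 bucket taken from the response above, and drop_warmup_days is a made-up name.
"datehisto": {
  "date_histogram": {
    "field": "Source Time",
    "interval": "day"
  },
  "aggs": {
    "max_dest_ips": { ... },
    "max_dest_ips_std_dev": { ... },
    "drop_warmup_days": {
      "bucket_selector": {
        "buckets_path": {
          "key": "_key"
        },
        "script": "params.key >= 1524787200000L"
      }
    }
  }
}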
I have a list of products (deal entities) and I'm attempting to create a bucket aggregation by categories, ordered by the sum of available_stock.
This all works fine, but I want to exclude categories that don't have level set to 1 from the resulting aggregation (in other words, I only want to keep aggregations on categories where level IS 1).
I am aware that Elasticsearch provides "exclude" and "include" parameters, but these only work on the same field I'm aggregating on (deal.category.id in this case).
This is my sample deal document:
{
  "_source": {
    "id": 392745,
    "category": [
      {
        "id": 17575,
        "level": 2
      },
      {
        "id": 17574,
        "level": 1
      },
      {
        "id": 17572,
        "level": 0
      }
    ],
    "stats": {
      "available_stock": 500
    }
  }
}
And this would be the query:
{
  "query": {
    "filtered": {
      "query": {
        "match_all": {}
      }
    }
  },
  "aggs": {
    "mainAggregation": {
      "terms": {
        "field": "deal.category.id",
        "order": {
          "available_stock": "desc"
        },
        "size": 3
      },
      "aggs": {
        "available_stock": {
          "sum": {
            "field": "deal.stats.available_stock"
          }
        }
      }
    }
  },
  "size": 0
}
And this is my resulting aggregation, sadly still including category 17572 with level 0:
{
  "aggregations": {
    "mainAggregation": {
      "buckets": [
        {
          "key": 17572,
          "doc_count": 30,
          "available_stock": {
            "value": 24000
          }
        },
        {
          "key": 17598,
          "doc_count": 10,
          "available_stock": {
            "value": 12000
          }
        },
        {
          "key": 17602,
          "doc_count": 8,
          "available_stock": {
            "value": 6000
          }
        }
      ]
    }
  }
}
P.S.: Currently on Elasticsearch 1.6.
Update 1: Still stuck on the problem after experiments with various combinations of sub-aggregations.
I have found this impossible to solve and decided to go with two separate queries.
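For reference, the usual way to express "only categories with level 1" is to map category as a nested type and combine a nested aggregation with a filter sub-aggregation on category.level. Here is a sketch (assuming a nested mapping; the aggregation names are made up, and ordering by the parent document's available_stock would additionally need a reverse_nested step, which is part of what makes this awkward):
{
  "size": 0,
  "aggs": {
    "categories": {
      "nested": {
        "path": "category"
      },
      "aggs": {
        "level_one_only": {
          "filter": {
            "term": {
              "category.level": 1
            }
          },
          "aggs": {
            "by_category": {
              "terms": {
                "field": "category.id",
                "size": 3
              }
            }
          }
        }
      }
    }
  }
}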
I came across confusing behaviour in Elasticsearch (version 1.7.1). As per the documentation https://www.elastic.co/guide/en/elasticsearch/guide/current/_filtering_queries_and_aggregations.html , a filter applied to the query will also be applied to the aggregations. But when I issued the following query, I got unexpected results.
{
  "aggregations": {
    "outer": {
      "aggregations": {
        "inner": {
          "date_histogram": {
            "extended_bounds": {
              "min": 0
            },
            "field": "time",
            "interval": "30d",
            "min_doc_count": 0,
            "order": {
              "_key": "desc"
            }
          }
        }
      },
      "terms": {
        "field": "ad_id",
        "size": 10
      }
    }
  },
  "query": {
    "filtered": {
      "filter": {
        "and": {
          "filters": [
            {
              "range": {
                "time": {
                  "from": 1441619173000,
                  "include_lower": false,
                  "include_upper": true,
                  "to": 1442835370000
                }
              }
            }
          ]
        }
      }
    }
  }
}
A portion of the result is here:
{
  "buckets": [
    {
      "key": 203737,
      "doc_count": 27,
      "inner": {
        "buckets": [
          {
            "key_as_string": "2015-09-02T00:00:00.000Z",
            "key": 1441152000000,
            "doc_count": 27
          },
          {
            "key_as_string": "1970-01-31T00:00:00.000Z",
            "key": 2592000000,
            "doc_count": 0
          },
          ...
          {
            "key_as_string": "1970-01-01T00:00:00.000Z",
            "key": 0,
            "doc_count": 0
          }
        ]
      }
    }
  ]
}
Please note that the aggregation result includes keys outside the range I have applied. The type of the time field is date. I have also tried the following query, but the result was the same.
{
  "aggs": {
    "outer_filter": {
      "filter": {
        "and": {
          "filters": [
            {
              "range": {
                "time": {
                  "from": 1441619173000,
                  "include_lower": false,
                  "include_upper": true,
                  "to": 1442835370000
                }
              }
            }
          ]
        }
      },
      "aggs": {
        "outer_term": {
          "terms": {
            "field": "ad_id",
            "size": 10
          },
          "aggs": {
            "inner": {
              "date_histogram": {
                "extended_bounds": {
                  "min": 0
                },
                "field": "time",
                "interval": "30d",
                "min_doc_count": 0,
                "order": {
                  "_key": "desc"
                }
              }
            }
          }
        }
      }
    }
  }
}
My problem is that the aggregation result includes buckets outside the filter ("from": 1441619173000, "to": 1442835370000).
Why are the filters not getting applied?
Can anyone help, please?
The extended_bounds min value is the problem. As min is 0 and the field is of type date, the buckets start from 1970 itself.
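To keep the forced empty buckets inside the filtered window, set the bounds to the same range as the filter instead of 0 (a sketch reusing the values from the question):
"inner": {
  "date_histogram": {
    "field": "time",
    "interval": "30d",
    "min_doc_count": 0,
    "extended_bounds": {
      "min": 1441619173000,
      "max": 1442835370000
    },
    "order": {
      "_key": "desc"
    }
  }
}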
You appear to have the range filter confused with the range aggregation.
The range filter takes two types of parameters, gte or gt (greater than) and lte or lt (less than).
The from/to parameters are for the range aggregation, which is used to split your results into user defined buckets.
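For example, with a hypothetical price field, the two look like this. A range filter:
{
  "range": {
    "price": {
      "gte": 10,
      "lte": 20
    }
  }
}
A range aggregation, which buckets results instead of filtering them:
{
  "range": {
    "field": "price",
    "ranges": [
      { "from": 10, "to": 20 }
    ]
  }
}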
Is there a way to simplify and optimize the following query:
{
  "query": {
    "filtered": {
      "filter": {
        "and": [
          {
            "range": {
              "ts": {
                "gte": "2014-12-18",
                "lte": "2014-12-18"
              }
            }
          }
        ]
      },
      "query": {
        "match": {
          "track_events.event": "render"
        }
      }
    }
  },
  "aggs": {
    "per_type": {
      "terms": {
        "field": "type",
        "order": {
          "_count": "desc"
        },
        "size": 0
      },
      "aggs": {
        "per_hour": {
          "terms": {
            "script": "(doc[\"track_events.ts\"].value - doc[\"ts\"].value)/(1000 * 3600)",
            "order": {
              "_count": "desc"
            },
            "size": 0
          }
        }
      }
    }
  }
}
The index in Elasticsearch contains documents with the fields track_events.ts and ts. The purpose is to count how many occurrences fall into each hourly interval between track_events.ts and ts.
Example response:
"buckets": [
  {
    "key": "0",
    "doc_count": 67736997
  },
  {
    "key": "1",
    "doc_count": 7193214
  },
  {
    "key": "2",
    "doc_count": 3406966
  },
  {
    "key": "3",
    "doc_count": 1988135
  }
]
which means that 67736997 documents were found with a time difference of less than 1 hour, 7193214 with a time difference of less than 2 hours, etc.
The biggest performance gain would be to replace the script.
That is, instead of computing:
(doc[\"track_events.ts\"].value - doc[\"ts\"].value)/(1000 * 3600)
pre-calculate this value when loading the data into Elasticsearch and put it into another field, then run the terms aggregation on that field instead.
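A sketch of what the aggregation could then look like, assuming a hypothetical pre-computed field render_delay_hours stored on each document at index time as (track_events.ts - ts) / 3600000, with the same filtered query as above:
"aggs": {
  "per_type": {
    "terms": {
      "field": "type",
      "order": {
        "_count": "desc"
      },
      "size": 0
    },
    "aggs": {
      "per_hour": {
        "terms": {
          "field": "render_delay_hours",
          "order": {
            "_count": "desc"
          },
          "size": 0
        }
      }
    }
  }
}
This avoids running the script once per document at query time; the difference is computed once at index time and the aggregation becomes a plain terms lookup on a stored field.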