To get hits inside aggregations in Elasticsearch

I have a date field inside my data. I ran a date histogram aggregation on it, with the interval set to day. It now returns the number of documents per day interval.
Here is the query I used:
{
  "aggs": {
    "dateHistogram": {
      "date_histogram": {
        "field": "currentDate",
        "interval": "day"
      }
    }
  }
}
Below is the exact response I received.
{
  "aggregations": {
    "dateHistogram": {
      "buckets": [{
        "key_as_string": "2015-05-06",
        "key": 1430870400000,
        "doc_count": 10
      }, {
        "key_as_string": "2015-04-06",
        "key": 1430870500000,
        "doc_count": 14
      }]
    }
  }
}
From the above response it is clear that there are 10 documents under the key "1430870400000" and 14 documents under the key "1430870500000". But apart from the document count, the individual documents themselves are not shown. I want them to be included in the response so that I can take values out of them. How do I achieve this in Elasticsearch?

The easy way to do this is with the top_hits aggregation. You can find the usage of top_hits here.
The top_hits aggregation returns the most relevant documents inside each bucket of the aggregation you have run, and it also provides options to specify the offset to start from, the number of hits to return, and how to sort them.
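For example, here is a minimal sketch of nesting a top_hits aggregation under the date histogram from the question (the sub-aggregation name "bucketDocs" and the size and sort values are just illustrative choices, not something from the original post):
{
  "size": 0,
  "aggs": {
    "dateHistogram": {
      "date_histogram": {
        "field": "currentDate",
        "interval": "day"
      },
      "aggs": {
        "bucketDocs": {
          "top_hits": {
            "size": 10,
            "sort": [{ "currentDate": { "order": "desc" } }]
          }
        }
      }
    }
  }
}
Each bucket in the response should then carry its own hits array under bucketDocs.hits.hits, from which the individual documents can be read.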

As per my understanding, you want to fetch all documents and use those documents for the aggregation, so you can use a match_all query together with the aggregation, as below:
{
  "query": {
    "bool": {
      "must": [
        {
          "match_all": {}
        }
      ]
    }
  },
  "aggs": {
    "date_wise_logs_counts": {
      "date_histogram": {
        "field": "currentDate",
        "interval": "day"
      }
    }
  }
}
The above returns 10 documents in the hits array by default; set size to a larger number to get more than 10 items (where that number is something you believe is bigger than your dataset). For large result sets, though, you should use scan and scroll instead of a huge size.
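As a rough sketch of the scroll approach (the index name your_index and the page size are placeholders; recent Elasticsearch versions use plain scroll, as the old scan search type was deprecated and later removed):
GET /your_index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}
Each response contains a _scroll_id; pass it back to fetch the next page until no more hits are returned:
POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<the _scroll_id from the previous response>"
}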

Related

ElasticSearch - How can I apply a filter over the results of a query to limit the documents that have a certain value

I have a question regarding Elasticsearch, but I am not sure where to start searching or which precise operation I should search for using Google.
Let's say I have a document with data and one of its fields is "the_best" (a boolean). Currently, out of 48 results (given by a working query), I have about 15 documents returned with the the_best field set to true.
Now, I would like to limit this to a maximum of 2 documents with the_best set to true in the results. So Elasticsearch should now return 35 results (if we stick with the numbers above):
Base (out of 48 results): [15 the_best=true, 33 the_best=false]
Expected (with max 2 the_best=true): I should get 35 results [2 the_best=true, 33 the_best=false]
Any idea? :)
One way to do this is using _msearch.
Using _msearch you can combine multiple queries:
GET <index>/_msearch
{}
{"query":{"term":{"the_best":true}},"from":0,"size":2}
{}
{"query":{"term":{"the_best":false}},"from":0,"size":15}
If you want to do it in a single search, aggregations can be used (this will be less performant).
Here I have used a filter aggregation together with a top_hits aggregation:
{
  "size": 0,
  "aggs": {
    "true": {
      "filter": {
        "term": {
          "the_best": true
        }
      },
      "aggs": {
        "docs": {
          "top_hits": {
            "size": 2
          }
        }
      }
    },
    "false": {
      "filter": {
        "term": {
          "the_best": false
        }
      },
      "aggs": {
        "docs": {
          "top_hits": {
            "size": 10
          }
        }
      }
    }
  }
}
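Because size is 0 at the top level, no regular hits are returned; the matching documents appear inside the aggregation response under aggregations.true.docs.hits.hits and aggregations.false.docs.hits.hits, from where they can be merged client-side.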

How to get total number of aggregation buckets in Elasticsearch?

I use Elasticsearch terms aggregation to see how many documents have a certain value in their "foo" field like this:
{
  ...
  "aggregations": {
    "metastore": {
      "terms": {
        "field": "foo",
        "size": 50
      }
    }
  }
}
and I get the response:
"aggregations": {
"foo": {
"buckets": [
{
"key_as_string": "2018-10-01T00:00:00.000Z",
"key": 1538352000000,
"doc_count": 935
},
{
"key_as_string": "2018-11-01T00:00:00.000Z",
"key": 1541030400000,
"doc_count": 15839
},
...
/* 48 more values */
]
}
}
But I'm limiting the number of distinct values to 50. If there are more distinct values in this field they won't be returned in the response, and that's fine, because I don't need all of them, but I would like to know how many there are. So, how can I get the total number of distinct values? It would be fantastic if the answer provided a full example query, thanks.
You can add a cardinality aggregation, which will give you the number of unique terms for the field. This will be equal to the number of buckets the terms aggregation would produce.
{
  ...
  "aggregations": {
    "metastore": {
      "terms": {
        "field": "foo",
        "size": 50
      }
    },
    "uniquefoo": {
      "cardinality": {
        "field": "foo"
      }
    }
  }
}
NOTE: Please keep in mind that the cardinality aggregation might in some cases return an approximate count. To learn more about it, read here.
The cardinality aggregation is there to help. Just note, however, that the number that is returned is an approximation and might not reflect the exact number of buckets you'd get if you were to request them all. However, the accuracy is pretty good on low cardinality fields.
{
  ...
  "aggregations": {
    "unique_count": {
      "cardinality": {
        "field": "foo"
      }
    },
    "metastore": {
      "terms": {
        "field": "foo",
        "size": 50
      }
    }
  }
}
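If the approximation matters for your use case, the cardinality aggregation also accepts a precision_threshold option; counts below the threshold are expected to be close to exact, at the cost of more memory (the maximum supported value is 40000). A small sketch, where the value 1000 is just an illustrative choice:
"unique_count": {
  "cardinality": {
    "field": "foo",
    "precision_threshold": 1000
  }
}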

Paginate an aggregation sorted by hits on Elastic index

I have an Elastic index (say file) where I append a document every time a file is downloaded by a client. Each document is quite basic: it contains a filename field and a when date field indicating the time of the download.
What I want to achieve is to get, for each file, the number of times it has been downloaded in the last 3 months. Thanks to another question, I have a query that returns all the results:
{
  "query": {
    "range": {
      "when": {
        "gte": "now-3M"
      }
    }
  },
  "aggs": {
    "downloads": {
      "terms": {
        "field": "filename.keyword",
        "size": 1000
      }
    }
  },
  "size": 0
}
Now, I want to have a paginated result. The terms aggregation cannot be paginated, so I use a composite aggregation. Of course, if there is a better aggregation for this, it can be used here...
So for the moment, I have something like that:
{
  "query": {
    "range": {
      "when": {
        "gte": "now-3M"
      }
    }
  },
  "aggs": {
    "downloads_agg": {
      "composite": {
        "size": 100,
        "sources": [
          {
            "downloads": {
              "terms": {
                "field": "filename.keyword"
              }
            }
          }
        ]
      }
    }
  },
  "size": 0
}
This aggregation allows me to paginate (thanks to the after_key value in the response), but it is not sorted by the number of downloads - it is sorted by the filename.
How can I sort that composite aggregation on the number of documents for each filename in my index?
Thanks.
The composite aggregation doesn't allow sorting based on the value field (such as the doc count).
Excerpt from the discussion on the Elastic forum:
it's designed as a memory-friendly way to paginate over aggregations. Part of the tradeoff is that you lose things like ordering by doc count, since that isn't known until after all the docs have been collected.
I have no experience with Transforms (part of X-Pack and licensed), but you can try that out. Apart from this, I don't see a way to get the expected output.
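For reference, here is a rough, untested sketch of what a transform preview might look like in recent versions that expose the _transform API (the index and field names are taken from the question; the group_by and aggregation names are made up here):
POST _transform/_preview
{
  "source": {
    "index": "file",
    "query": { "range": { "when": { "gte": "now-3M" } } }
  },
  "pivot": {
    "group_by": {
      "filename": { "terms": { "field": "filename.keyword" } }
    },
    "aggregations": {
      "downloads": { "value_count": { "field": "when" } }
    }
  }
}
The destination index produced by such a transform could then be queried with an ordinary sort on downloads plus from/size pagination.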

Elasticsearch filter based on field similarity

For reference, I'm using Elasticsearch 6.4.0.
I have an Elasticsearch query that returns a certain number of hits, and I'm trying to remove hits whose text field values are too similar. My query is:
{
  "size": 10,
  "collapse": {
    "field": "author_id"
  },
  "query": {
    "function_score": {
      "boost_mode": "replace",
      "score_mode": "avg",
      "functions": [
        {
          //my custom query function
        }
      ],
      "query": {
        "bool": {
          "must_not": [
            {
              "term": {
                "author_id": MY_ID
              }
            }
          ]
        }
      }
    }
  },
  "aggs": {
    "book_name_sample": {
      "sampler": {
        "shard_size": 10
      },
      "aggs": {
        "frequent_words": {
          "significant_text": {
            "field": "book_name",
            "filter_duplicate_text": true
          }
        }
      }
    }
  }
}
This query uses a custom function score combined with a filter to return books a person might like (that they haven't authored). Thing is, for some people, it returns books with names that are very similar (i.e. The Life of George Washington, Good Times with George Washington, Who was George Washington), and I'd like the hits to have a more diverse set of names.
I'm using a bucket_selector to aggregate the hits based on text similarity, and the query gives me something like:
...,
"aggregations": {
  "book_name_sample": {
    "doc_count": 10,
    "frequent_words": {
      "doc_count": 10,
      "bg_count": 482626,
      "buckets": [
        {
          "key": "George",
          "doc_count": 3,
          "score": 17.278715785140975,
          "bg_count": 9718
        },
        {
          "key": "Washington",
          "doc_count": 3,
          "score": 15.312204414323656,
          "bg_count": 10919
        }
      ]
    }
  }
}
Is it possible to filter the returned documents based on this aggregation result within Elasticsearch? I.e. remove hits whose book_name_sample doc_count is less than X? I know I can do this in PHP or whatever language consumes the hits, but I'd like to keep it within ES. I've tried using a bucket_selector aggregation like so:
"book_name_bucket_filter": {
"bucket_selector": {
"buckets_path": {
"freqWords": "frequent_words"
},
"script": "params.freqWords < 3"
}
}
But then I get an error: org.elasticsearch.search.aggregations.bucket.sampler.InternalSampler cannot be cast to org.elasticsearch.search.aggregations.InternalMultiBucketAggregation
Also, if that filter removes enough documents so that the hit count is less than the requested size, is it possible to tell ES to go fetch the next top scoring hits so that hits count is filled out?
Why not use top_hits inside the aggregation to get the relevant documents that match each bucket? You can specify how many relevant top hits you want inside the top_hits aggregation, so this will give you a certain number of documents for each bucket.
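As a rough illustration only (the sub-aggregation name sample_docs and its size are invented here, and since significant_text does not, to my knowledge, accept child aggregations, the top_hits sits directly under the sampler rather than under frequent_words):
{
  "aggs": {
    "book_name_sample": {
      "sampler": {
        "shard_size": 10
      },
      "aggs": {
        "frequent_words": {
          "significant_text": {
            "field": "book_name",
            "filter_duplicate_text": true
          }
        },
        "sample_docs": {
          "top_hits": {
            "size": 5,
            "_source": ["book_name"]
          }
        }
      }
    }
  }
}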

Is there a way to have elasticsearch return a hit per generated bucket during an aggregation?

Right now I have a query like this:
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "uuid": "xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxxx"
          }
        },
        {
          "range": {
            "date": {
              "from": "now-12h",
              "to": "now"
            }
          }
        }
      ]
    }
  },
  "aggs": {
    "query": {
      "terms": [
        {
          "field": "query",
          "size": 3
        }
      ]
    }
  }
}
The aggregation works perfectly well, but I can't seem to find a way to control the hit data that is returned. I can use the size parameter at the top of the DSL, but the hits that are returned are not in the same order as the buckets, so the bucket results do not line up with the hit results. Is there any way to correct this, or do I have to issue 2 separate queries?
To expand on Filipe's answer, it seems like the top_hits aggregation is what you are looking for, e.g.
{
  "query": {
    ... snip ...
  },
  "aggs": {
    "query": {
      "terms": {
        "field": "query",
        "size": 3
      },
      "aggs": {
        "top": {
          "top_hits": {
            "size": 42
          }
        }
      }
    }
  }
}
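In the response, the documents for each term then appear under aggregations.query.buckets[n].top.hits.hits, so every bucket carries its own (up to 42) hits and lines up with the term it belongs to.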
Your query uses exact matches (match and range) and binary logic (must, bool) and thus should probably be converted to use filters instead:
"filtered": {
"filter": {
"bool": {
"must": [
{
"term": {
"uuid": "xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxxxxx"
}
},
{
"range": {
"date": {
"from": "now-12h",
"to": "now"
}
}
}
]
}
}
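(Note that this filtered query syntax is from the Elasticsearch 1.x era; in later versions the filtered query was deprecated in favour of a bool query with a filter clause.)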
As for the aggregations:
The hits that are returned do not represent all the buckets that were returned. So if I have buckets for terms 'a', 'b', and 'c', I want to have hits that represent those buckets as well.
Perhaps you are looking to control the scope of the buckets? You can make an aggregation bucket global so that it will not be influenced by the query or filter.
Keep in mind that Elasticsearch will not "group" hits in any way -- it is always a flat list ordered according to score and additional sorting options.
Aggregations can be organized in a nested structure and return computed or extracted values, in a specific order. In the case of terms aggregation, it is in descending count (highest number of hits first). The hits section of the response is never influenced by your choice of aggregations. Similarly, you cannot find hits in the aggregation sections.
If your goal is to group documents by a certain field, yes, you will need to run multiple queries in the current Elasticsearch release.
I'm not 100% sure, but I think there's no way to do that in the current version of Elasticsearch (1.2.x). The good news is that there will be when version 1.3.x gets released:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-aggregations-metrics-top-hits-aggregation.html
