I am creating a report to compare the actual count from the database with the indexed records.
I have three indices: index1, index2 and index3.
To get the count for a single index, I am using the following URL:
http://localhost:9200/index1/_count?q=_type:invoice
=> {"count":50,"_shards":{"total":5,"successful":5,"failed":0}}
For multiple indices:
http://localhost:9200/index1,index2/_count?q=_type:invoice
=> {"count":80,"_shards":{"total":5,"successful":5,"failed":0}}
Now the counts are added up, but I want them grouped by index as well. Also, how can I pass filters to group by a specific field
to get the output like this:
{"index1_count":50,"index2_count":50,"approved":10,"rejected":40 ,"_shards":{"total":5,"successful":5,"failed":0}}
You can use _search?search_type=count and do a terms aggregation on the _index field to distinguish between the indices:
GET /index1,index2/_search?search_type=count
{
"aggs": {
"by_index": {
"terms": {
"field": "_index"
}
}
}
}
and the result would be something like this:
"aggregations": {
"by_index": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "index1",
"doc_count": 50
},
{
"key": "index2",
"doc_count": 80
}
]
}
}
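For the approved/rejected part of the desired output, you can add a second aggregation to the same request on the field that holds that value. This is only a sketch: the question does not name that field, so status is assumed here, and it needs to be a non-analyzed (keyword-like) field for exact bucketing. Note also that on Elasticsearch 2.x and later search_type=count is gone and you would send "size": 0 in the body instead:
GET /index1,index2/_search?search_type=count
{
"query": {
"term": {
"_type": "invoice"
}
},
"aggs": {
"by_index": {
"terms": {
"field": "_index"
}
},
"by_status": {
"terms": {
"field": "status"
}
}
}
}
The response will not be one flat object like the one shown in the question, but the by_index buckets give the per-index counts and the by_status buckets give the approved/rejected counts; flattening that into a single object is a small client-side step.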
I am looking for a way to group aggregation results so I can filter them down. Currently my response is pretty large (> 1mb) and I'm hoping to return only the top matching filters.
I'm not sure if Elastic is capable of grouping aggregations by the sub aggregation without using nesting, but I figured I would give it a try.
The filter data is stored in an array on each of my objects:
// document a
"attributeValues" : [
"A12345|V12345",
"A22345|V22345",
...
]
// document b
"attributeValues" : [
"A12345|V15555",
"A22345|V22345",
...
]
I am currently aggregating on the values and getting results like this:
{
"key": "A12345|V12345",
"doc_count": 10
},
{
"key": "A12345|V15555",
"doc_count": 7
},
{
"key": "A22345|V22345",
"doc_count": 5
},
I would like to be able to group these aggregations by the first part of the string so that I can return only the top 10 matches and get something like this:
"topAttributes" : {
"buckets" : [
{
"key" : "A12345",
"doc_count" : 17,
"attributes" : {
"buckets" : [
{
"key": "A12345|V12345",
"doc_count": 10
},
{
"key": "A12345|V15555",
"doc_count": 7
},
I have tried to do this with a script in the terms aggregation, but I cannot seem to find anywhere online (I checked many questions) how to get the sub-aggregation's results.
The script would look something like this:
GET test_index/_search
{
"size" : 0,
"aggs": {
"attributeValuesTop": {
"terms": {
"size": 10,
"script": {
"source": """
return attributes.splitOnToken('|')[1];
"""
}
},
"aggs": {
"attributes": {
"terms": {
"field": "attributeValues",
"size": 10000
}
}
}
}
}
}
NOTE: I know we can use a nested solution, but nested is too slow for the number of documents we have (millions of records) and our target of sub-300 ms searches.
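For what it's worth, a sketch of how the prefix grouping might be expressed with a Painless script in the outer terms aggregation (this assumes attributeValues is a keyword field with doc values, so doc['attributeValues'] is accessible from the script):
GET test_index/_search
{
"size": 0,
"aggs": {
"attributeValuesTop": {
"terms": {
"size": 10,
"script": {
"lang": "painless",
"source": """
// emit the part before '|' for every value in the array,
// so each document contributes one bucket key per prefix
List prefixes = new ArrayList();
for (def value : doc['attributeValues']) {
prefixes.add(value.splitOnToken('|')[0]);
}
return prefixes;
"""
}
},
"aggs": {
"attributes": {
"terms": {
"field": "attributeValues",
"size": 10000
}
}
}
}
}
}
Two caveats: the inner attributes sub-aggregation still buckets whole documents, so a prefix bucket can contain values belonging to other prefixes from the same documents; and a scripted terms aggregation over millions of documents is itself not cheap, so it may not meet the 300 ms target either.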
I'm using an Elasticsearch terms aggregation to bucket documents based on an array property. I'm running into an issue where I get back buckets that are not in my query, and I'd like to filter those out.
Let's say each document is a Post, and has an array property media which specifies which social media website the post is on (and may be empty):
{
id: 1
media: ["facebook", "twitter", "instagram"]
}
{
id: 2
media: ["twitter", "instagram", "tiktok"]
}
{
id: 3
media: ["instagram"]
}
{
id: 4
media: []
}
And, let's say there's another index of Users, which stores a favorite_media property of the same type.
{
id: 42
favorite_media: ["twitter", "instagram"]
}
I have a query that uses a terms lookup to filter, then does a terms aggregation.
{
"query": {
"bool": {
"filter": {
"terms": {
"media": {
"index": "user_index",
"id": "42",
"path": "favorite_media"
}
}
}
}
},
"aggs": {
"Posts_by_media": {
"terms": {
"field": "media",
"size": 1000
}
}
}
}
This will result in:
{
...
"aggregations": {
"Posts_by_media": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "instagram",
"doc_count": 3
},
{
"key": "twitter",
"doc_count": 2
},
{
"key": "facebook",
"doc_count": 1
},
{
"key": "tiktok",
"doc_count": 1
}
]
}
}
}
Because media is an array property, any document that matches the filter will be used to create buckets, and I'll have buckets that don't match my filter. Here I want to only get back the buckets twitter and instagram, since those are the two that I'm filtering on (via the terms lookup).
I know terms aggregations offer an include option, but that doesn't work for me here since I'm using a terms lookup and don't know the data in favorite_media at query time.
How can I limit my buckets to be only those that match the filters in my query?
Thank you for your help!
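One workaround, sketched here with an extra round trip: fetch the user's favorite_media first, then pass those values both into the filter and into the terms aggregation's include parameter (which accepts an array of exact values on recent versions). The post index name posts and the document API path are assumptions, not taken from the question:
GET user_index/_doc/42?_source=favorite_media
and then, with the returned values (here ["twitter", "instagram"]):
GET posts/_search
{
"size": 0,
"query": {
"bool": {
"filter": {
"terms": {
"media": ["twitter", "instagram"]
}
}
}
},
"aggs": {
"Posts_by_media": {
"terms": {
"field": "media",
"size": 1000,
"include": ["twitter", "instagram"]
}
}
}
}
This trades the single terms-lookup request for two requests, but it keeps the buckets limited to exactly the values being filtered on.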
I recently started working with Elasticsearch, and I am trying to search with the following criteria:
I want an exact match on ENAME and distinct values of both EID & ENAME on the above data.
Let's say that for matching I have the string ABC.
So the result should look like this:
[
{"EID" :111, "ENAME" : "ABC"},
{"EID" : 444, "ENAME" : "ABC"}
]
You can achieve this via a combination of term query and terms aggregation.
Assuming that you have the following mapping:
PUT my_index
{
"mappings": {
"doc": {
"properties": {
"EID": {
"type": "keyword"
},
"ENAME": {
"type": "keyword"
}
}
}
}
}
And inserted the documents like this:
POST my_index/doc/3
{
"EID": "111",
"ENAME": "ABC"
}
POST my_index/doc/4
{
"EID": "222",
"ENAME": "XYZ"
}
POST my_index/doc/12
{
"EID": "444",
"ENAME": "ABC"
}
The query that will do the job might look like this:
POST my_index/doc/_search
{
"query": {
"term": { 1️⃣
"ENAME": "ABC"
}
},
"size": 0, 3️⃣
"aggregations": {
"by EID": {
"terms": { 2️⃣
"field": "EID"
}
}
}
}
Let me explain how it works:
1️⃣ - term query asks Elasticsearch to filter on exact value of a keyword field "ENAME";
2️⃣ - terms aggregation collects the list of all possible values of another keyword field "EID" and gives back the first N most frequent ones;
3️⃣ - "size": 0 tells Elasticsearch not to return any search hits (we are only interested in the aggregations).
The output of the query will look like this:
{
"hits": {
"total": 2,
"max_score": 0,
"hits": []
},
"aggregations": {
"by EID": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "111", <== Here is the first "distinct" value that we wanted
"doc_count": 3
},
{
"key": "444", <== Here is another "distinct" value
"doc_count": 2
}
]
}
}
}
The output does not look exactly like what you posted in the question, but I believe it is the closest you can get with Elasticsearch.
However, this output is equivalent:
"ENAME" is implicitly present (since its value was used for filtering)
"EID" is present under the "buckets" of the aggregations section.
Note that under "doc_count" you will find the number of documents having such "EID".
What if I want to do a DISTINCT on several fields?
For a more complex scenario (e.g. when you need to do a distinct on many fields) see this answer.
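For instance, one common pattern (only a sketch, using the field names from this question) is to nest one terms aggregation inside another, so that each ENAME bucket lists the distinct EID values found under it:
POST my_index/doc/_search
{
"size": 0,
"aggs": {
"by ENAME": {
"terms": {
"field": "ENAME"
},
"aggs": {
"by EID": {
"terms": {
"field": "EID"
}
}
}
}
}
}
The outer buckets are the distinct ENAME values and the inner buckets the distinct EID values per name, which together enumerate the distinct (ENAME, EID) pairs.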
More information about aggregations is available here.
Hope that helps!
I want to get the unique values from Elasticsearch in the field named "name",
but I do not know how to express the condition that the values have to be unique.
The purpose of this work is to fetch all the unique names from the Elasticsearch database.
So basically what I need is an aggregation query that fetches the unique values.
Can someone help me fix this issue? Thanks a lot in advance.
You can use a terms aggregation on a field which is not_analyzed.
However, this is by default limited to the 10 most popular terms. You can change this by updating the size parameter of the terms aggregation. Setting it to 0 will allow you to have up to Integer.MAX_VALUE different terms (see the documentation here).
Here is an example mapping:
POST terms
{
"mappings":{
"test":{
"properties":{
"title":{
"type":"string",
"index":"not_analyzed"
}
}
}
}
}
Adding some documents:
POST terms/test
{
"title":"Foundation"
}
POST terms/test
{
"title":"Foundation & Empire"
}
Finally, the request:
POST terms/_search?search_type=count
{
"aggs": {
"By Title": {
"terms": {
"field": "title",
"size": 0
}
}
}
}
will give you what you need:
"aggregations": {
"By Title": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Foundation",
"doc_count": 1
},
{
"key": "Foundation & Empire",
"doc_count": 1
}
]
}
}
Be aware that if you have a large number of terms, this request will be very expensive to execute.
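Note that this answer targets older Elasticsearch versions: string fields with "index": "not_analyzed" later became keyword fields, search_type=count was replaced by "size": 0 in the request body, and "size": 0 on a terms aggregation is no longer accepted. On recent versions, a sketch of an equivalent request would page through all unique values with a composite aggregation (a keyword mapping of title is assumed):
POST terms/_search
{
"size": 0,
"aggs": {
"By Title": {
"composite": {
"size": 1000,
"sources": [
{ "title": { "terms": { "field": "title" } } }
]
}
}
}
}
Each response contains an after_key; sending it back as "after" in the next request returns the next page of unique values.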
For some of my queries to ElasticSearch I want three pieces of information back:
Which terms T occurred in the result document set?
How often does each element of T occur in the result document set?
How often does each element of T occur in the entire index (--> document frequency)?
The first two points are easily determined using the default terms facet or, nowadays, the terms aggregation.
So my question is really about the third point.
Before ElasticSearch 1.x, i.e. before the switch to the 'aggregation' paradigm, I could use a terms facet with the 'global' option set to true and a QueryFilter to get the document frequency ('global counts') of the exact terms occurring in the document set specified by the QueryFilter.
At first I thought I could do the same thing using a global aggregation, but it seems I can't. The reason is - if I understand correctly - that the original facet mechanism was centered around terms, whereas aggregation buckets are defined by the set of documents belonging to each bucket.
I.e. specifying the global option of a terms facet with a QueryFilter first determined the terms hit by the filter and then computed the facet values. Since the facet was global, I would receive the document counts.
With aggregations, it's different. The global aggregation can only be used as a top-level aggregation, causing it to ignore the current query results and compute the aggregation - e.g. a terms aggregation - on all documents in the index. For me, that's too much, since I WANT to restrict the returned terms ('buckets') to the terms in the document result set. But if I use a filter sub-aggregation with a terms sub-aggregation, I would restrict the term buckets to the filter again, thus not retrieving the document frequencies but normal facet counts. The reason is that the buckets are determined after the filter, so they are "too small". But I don't want to restrict the bucket size, I want to restrict the buckets to the terms in the query result set.
How can I get the document frequency of those terms in a query result set using aggregations (since facets are deprecated and will be removed)?
Thanks for your time!
EDIT: Here comes an example of how I tried to achieve the desired behaviour.
I will define two aggregations:
global_agg_with_filter_and_terms
global_agg_with_terms_and_filter
Both have a global aggregation at the top because that is the only valid position for it. Then, in the first aggregation, I first filter the results down to the original query and then apply a terms sub-aggregation.
In the second aggregation, I do mostly the same, except that here the filter aggregation is a sub-aggregation of the terms aggregation. Hence the similar names; only the order of aggregation differs.
{
"query": {
"query_string": {
"query": "text: my query string"
}
},
"aggs": {
"global_agg_with_filter_and_terms": {
"global": {},
"aggs": {
"filter_agg": {
"filter": {
"query": {
"query_string": {
"query": "text: my query string"
}
}
},
"aggs": {
"terms_agg": {
"terms": {
"field": "facets"
}
}
}
}
}
},
"global_agg_with_terms_and_filter": {
"global": {},
"aggs": {
"document_frequency": {
"terms": {
"field": "facets"
},
"aggs": {
"term_count": {
"filter": {
"query": {
"query_string": {
"query": "text: my query string"
}
}
}
}
}
}
}
}
}
}
Response:
{
"took": 18,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 221,
"max_score": 0.9839197,
"hits": <omitted>
},
"aggregations": {
"global_agg_with_filter_and_terms": {
"doc_count": 1978,
"filter_agg": {
"doc_count": 221,
"terms_agg": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "fid8",
"doc_count": 155
},
{
"key": "fid6",
"doc_count": 40
},
{
"key": "fid9",
"doc_count": 10
},
{
"key": "fid5",
"doc_count": 9
},
{
"key": "fid13",
"doc_count": 5
},
{
"key": "fid7",
"doc_count": 2
}
]
}
}
},
"global_agg_with_terms_and_filter": {
"doc_count": 1978,
"document_frequency": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "fid8",
"doc_count": 1050,
"term_count": {
"doc_count": 155
}
},
{
"key": "fid6",
"doc_count": 668,
"term_count": {
"doc_count": 40
}
},
{
"key": "fid9",
"doc_count": 67,
"term_count": {
"doc_count": 10
}
},
{
"key": "fid5",
"doc_count": 65,
"term_count": {
"doc_count": 9
}
},
{
"key": "fid7",
"doc_count": 63,
"term_count": {
"doc_count": 2
}
},
{
"key": "fid13",
"doc_count": 55,
"term_count": {
"doc_count": 5
}
},
{
"key": "fid10",
"doc_count": 11,
"term_count": {
"doc_count": 0
}
},
{
"key": "fid11",
"doc_count": 9,
"term_count": {
"doc_count": 0
}
},
{
"key": "fid12",
"doc_count": 5,
"term_count": {
"doc_count": 0
}
}
]
}
}
}
}
At first, please have a look at the first two returned term buckets of both aggregations, with keys fid8 and fid6. We can easily see that those terms appeared in the result set 155 and 40 times, respectively. Now please look at the second aggregation, global_agg_with_terms_and_filter. The terms aggregation is within the scope of the global aggregation, so here we can actually see the document frequencies, 1050 and 668, respectively. So this part looks good.
The issue arises when you scan the list of term buckets further down, to the buckets with the keys fid10 to fid12. While we receive their document frequency, we can also see that their term_count is 0. This is because those terms did not occur in our query results, which we also used for the filter sub-aggregation.
So the problem is that for ALL terms (global scope!) the document frequency and the facet count with regard to the actual query result are returned. But I need this only for the terms that occurred in the query result, i.e. for exactly those terms returned by the first aggregation, global_agg_with_filter_and_terms.
Perhaps there is a possibility to define some kind of filter that removes all buckets whose sub-filter aggregation term_count has a zero doc_count?
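As an aside: that kind of bucket pruning later became possible with the bucket_selector pipeline aggregation (Elasticsearch 2.0 and later). A sketch, written in the newer syntax where the filter aggregation takes the query directly and scripts are Painless, of how the second aggregation above could drop the zero-count buckets:
"global_agg_with_terms_and_filter": {
"global": {},
"aggs": {
"document_frequency": {
"terms": {
"field": "facets"
},
"aggs": {
"term_count": {
"filter": {
"query_string": {
"query": "text: my query string"
}
}
},
"only_query_terms": {
"bucket_selector": {
"buckets_path": {
"termCount": "term_count>_count"
},
"script": "params.termCount > 0"
}
}
}
}
}
}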
Hello and sorry if the answer is late.
You should have a look at the Significant Terms aggregation: like the terms aggregation, it returns one bucket for each term occurring in the result set, with the number of occurrences available through doc_count, but you also get the number of occurrences in a background set through bg_count. This means it only creates buckets for terms appearing in documents of your query result set.
The default background set comprises all documents in the query scope, but can be filtered down to any subset you want using background_filter.
You can use a scripted bucket scoring function to rank the buckets the way you want by combining several metrics:
_subset_freq: number of documents the term appears in the results set,
_superset_freq: number of documents the term appears in the background set,
_subset_size: number of documents in the results set,
_superset_size: number of documents in the background set.
Request:
{
"query": {
"query_string": {
"query": "text: my query string"
}
},
"aggs": {
"terms": {
"significant_terms": {
"field": "facets",
"size": 100,
"script_heuristic": {
"script": "_subset_freq"
}
}
}
}
}
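The buckets then carry both counts the question asks for: doc_count is the number of occurrences in the result set and bg_count is the document frequency in the background set, and only terms that actually occur in the result set get a bucket at all. An illustrative fragment of the response, reusing the numbers from the question's own aggregations, might look like:
"aggregations": {
"terms": {
"doc_count": 221,
"buckets": [
{
"key": "fid8",
"doc_count": 155,
"score": 155,
"bg_count": 1050
},
...
]
}
}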