Ordering term aggregation buckets by sub-aggregation result values - elasticsearch

I have two questions about the query shown in this capture:
How do I order the results by the value of the sum_category field?
I use respsize again in the query, but as you can see below that is not correct.
Even if I only ask for an aggregation, why do all the documents come back with the result? If I run a GROUP BY query in SQL it returns only the grouped data, but Elasticsearch returns all matching documents as if I had run a normal search query. How do I skip them?

Try this:
{
  "query" : {
    "match_all" : {}
  },
  "size" : 0,
  "aggs" : {
    "categories" : {
      "terms" : {
        "field" : "category",
        "size" : 999999,
        "order" : {
          "sum_category" : "desc"
        }
      },
      "aggs" : {
        "sum_category" : {
          "sum" : {
            "field" : "respsize"
          }
        }
      }
    }
  }
}
1) See the note in (2) for what your sort is doing. As for ordering the categories by the value of sum_category, see the order portion above. There is an old, closed issue related to this (https://github.com/elastic/elasticsearch/issues/4643), but it worked fine for me with Elasticsearch v1.5.2.
2) Although you did not show a match_all query, that is probably what you are getting results for, and so the sort you specified is being applied to those results. To avoid getting them back, I added the "size" : 0 portion.
Do you want buckets for all the categories? I noticed you did not specify a size for the main aggregation; that is what the "size" : 999999 portion is for.
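For illustration, with "size" : 0 the hits array comes back empty and the response contains only the aggregation buckets, ordered by sum_category. Roughly like this (the category names and numbers here are invented, not from your data):
{
  "hits" : { "total" : 1234, "hits" : [] },
  "aggregations" : {
    "categories" : {
      "buckets" : [
        { "key" : "books", "doc_count" : 420, "sum_category" : { "value" : 987654.0 } },
        { "key" : "games", "doc_count" : 130, "sum_category" : { "value" : 123456.0 } }
      ]
    }
  }
}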

Related

How to limit elasticsearch to a list of documents each identified by a unique keyword

I have an elasticsearch document repository with ~15M documents.
Each document has a unique 11-character string field (it comes from a MongoDB) that identifies the document. This field is indexed as a keyword.
I'm using C#.
When I run a search, I want to be able to limit the search to a set of documents that I specify (via some list of the unique field ids).
My query text uses bool with must to supply a filter for the unique identifiers and additional clauses to actually search the documents. See example below.
To search a large number of documents, I generate multiple query strings and run them concurrently. Each query handles up to 64K unique ids (determined by the limit on terms).
In this case, I have 262,144 documents to search (list comes, at run time, from a separate mongo DB query). So my code generates 4 query strings (see example below).
I run them concurrently.
Unfortunately, this search takes over 22 seconds to complete.
When I run the same search but drop the terms node (so it searches all the documents), a single such query completes the search in 1.8 seconds.
An incredible difference.
So my question: Is there an efficient way to specify which documents are to be searched (when each document has a unique self-identifying keyword field)?
I want to be able to specify up to a few 100K of such unique ids.
Here's an example of my search specifying unique document identifiers:
{
  "_source" : "talentId",
  "from" : 0,
  "size" : 10000,
  "query" : {
    "bool" : {
      "must" : [
        {
          "bool" : {
            "must" : [
              { "match_phrase" : { "freeText" : "java" } },
              { "match_phrase" : { "freeText" : "unix" } },
              { "match_phrase" : { "freeText" : "c#" } },
              { "match_phrase" : { "freeText" : "cnn" } }
            ]
          }
        },
        {
          "bool" : {
            "filter" : {
              "bool" : {
                "should" : [
                  {
                    "terms" : {
                      "talentId" : [ "goGSXMWE1Qg", "GvTDYS6F1Qg",
                                     "-qa_N-aC1Qg", "iu299LCC1Qg",
                                     "0p7SpteI1Qg", ... 4,995 more ... ]
                    }
                  }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
#jarmod is right.
But if you don't want to completely redo your architecture, is there some other single talent-related shared field you could query instead of thousands of talentIds? It could be one more simple match_phrase query, as sketched below.
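For illustration only, assuming such a shared field existed (here called groupTag, with a made-up value; neither is in your actual mapping), the thousands-of-ids filter could collapse into a single term clause:
{
  "_source" : "talentId",
  "from" : 0,
  "size" : 10000,
  "query" : {
    "bool" : {
      "must" : [
        { "match_phrase" : { "freeText" : "java" } },
        { "match_phrase" : { "freeText" : "unix" } }
      ],
      "filter" : {
        "term" : { "groupTag" : "candidate-batch-42" }   // hypothetical field and value
      }
    }
  }
}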

How to apply aggregations on grouped fields in Elasticsearch?

On my eCommerce store I want to only include the first item in each group (grouped by item_id) in the final results. At the same time I don't want to lose my aggregations (little numbers next to attributes that indicate how many items with that attribute are found).
Here is a little example:
Suppose I make a search for items and only 25 show up. This is the result for the color aggregation that I currently get:
black (65)
green (32)
white (13)
And I want it to be:
black (14)
green (6)
white (5)
The numbers should amount to the total number the user actually sees on the page.
How could I achieve that with Elasticsearch? I have tried both grouping (top hits) and field collapsing, but neither seems to fit my use case. Solr does this almost by default with its Grouping functionality.
It should be rather easy. When you ask for an aggregation you are simply sending a request to the _search endpoint. Example:
POST /exams/_search
{
  "aggs" : {
    "avg_grade" : { "avg" : { "field" : "grade" } }
  }
}
In the above example you will get the aggregation computed over all documents.
If you want to get the aggregation for specific documents, you just need to add a query to the request body, like:
POST /exams/_search
{
  "query": {
    "bool" : {
      "must" : {
        "query_string" : {
          "query" : "some query string here"
        }
      },
      "filter" : {
        "term" : { "user" : "kimchy" }
      }
    }
  },
  "aggs" : {
    "avg_grade" : { "avg" : { "field" : "grade" } }
  }
}
You can send the size and from parameters as well.
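For instance, a minimal sketch that pages through hits while still computing the same aggregation (the paging values are just illustrative):
POST /exams/_search
{
  "from" : 0,
  "size" : 20,
  "query" : {
    "term" : { "user" : "kimchy" }
  },
  "aggs" : {
    "avg_grade" : { "avg" : { "field" : "grade" } }
  }
}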

Applying a query on aggregation results in Elasticsearch?

I have an index in ES that has multiple documents. There is a field, software_tags, for which I want the unique values. I applied a terms aggregation for that purpose as below:
GET /record_new/_search
{
  "size" : 0,
  "aggs" : {
    "software_tags" : {
      "terms" : {
        "field" : "software_tags.keyword",
        "size" : 10000,
        "order" : { "_term" : "asc" }
      }
    }
  }
}
Now that I have all the unique values, I want to use them for a search feature: if there are software_tags containing the word "Windows", the user should get all of those tags when searching with any of these variations:
win, Windows, window, Window, windows
So basically I want to apply a case-insensitive query that searches over the aggregated results. How can this be done?

How to get elasticsearch most used words?

I am using a terms aggregation in Elasticsearch to get the most used words in an index with 380,607,390 (380 million) documents, and I receive a timeout in my application.
The aggregated field is text with a simple analyzer (the field holds post content).
My question is:
Is the terms aggregation the correct aggregation for this, given a large content field?
{
  "aggs" : {
    "keywords" : {
      "terms" : { "field" : "post_content" }
    }
  }
}
You can try this using min_doc_count. You would of course not want to get words that have only been used once or twice or three times...
You can set min_doc_count as per your requirements. This should definitely reduce the time.
{
  "aggs" : {
    "keywords" : {
      "terms" : {
        "field" : "post_content",
        "min_doc_count" : 5   // -----> set it as per your need
      }
    }
  }
}

elasticsearch number of facets returned

I have faceted queries working with Elasticsearch 0.19.9. However, I would like to return all facets that have a count > 0.
According to the documentation I should be able to:
{
  "query" : {
    "match_all" : { }
  },
  "facets" : {
    "tag" : {
      "terms" : {
        "field" : "tag",
        "all_terms" : true
      }
    }
  }
}
As I understand it, this should give me all facets even if the count is 0.
However, this still only returns the top 10 facets by count, which is the default size. The only thing that seems to affect the number of returned facets is actually setting "size" : N, where N is the number of facets to be returned.
I could set this to a really high number, but that just seems hack-ish.
Any ideas as to what I may be doing wrong?
You're not doing anything wrong; you figured it out correctly! There is an open issue on GitHub to make the terms facet similar to the terms stats facet, which allows you to set size=0 in order to get all the terms back. For now you just need to use a high value, which is a bit tricky, I agree. On the other hand, be careful not to return too many entries!
{
  "query" : {
    "match_all" : { }
  },
  "facets" : {
    "tag" : {
      "terms" : {
        "field" : "tag",
        "size" : 2147483647,
        "all_terms" : false
      }
    }
  }
}
The only way to drop the count: 0 entries is to set "all_terms" to false and set the size as high as your Elasticsearch instance will allow (the example above uses the largest signed 32-bit integer value).
It may not be the best way, but it is the only known approach so far. A facet filter does not work for this at present (unless Elasticsearch has been updated and improved to support it).
