Diversified results on Elasticsearch search

I've built a complex query that uses popularity to improve the results for social media documents in Elasticsearch.
The query works really well: the top results are always centered on the query and contain interesting elements.
However, it has a problem: for some queries the first results are all from the same user.
I would like to downscore a document if the same user already appears in a higher-ranked document. This way I expect the results to be more diversified.
Note that I don't want those documents removed, as in some cases it may still be interesting to find more documents from the same user; I just would like them to appear in a lower position.
Can anybody suggest a way to make this work?
As suggested in some comments, here is a (simplified) version of my query:
query = {"function_score": {
    "functions": [
        {"gauss": {"createdAt":
            {"origin": "now", "scale": "30d", "offset": "7d", "decay": 0.9}
        }},
        {"gauss": {"shares.last.twitter_retweets_log":
            {"origin": 4.52, "scale": 2.61, "decay": 0.9}
        }}
    ],
    "query": {"bool": {"must": [
        {"exists": {"field": "images"}},
        {"multi_match": {"query": "foo boo", "fields": ["text", "link.title"]}}
    ]}},
    "score_mode": "multiply"
}}
P.S.: some documentation pages that may be relevant, as they talk about diversification, but I'm not sure how to apply them:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-bucket-sampler-aggregation.html?q=sampler
https://lucene.apache.org/core/5_1_0/misc/org/apache/lucene/search/DiversifiedTopDocsCollector.html

You can couple the sampler with the top_hits aggregation to get diversified results.
{
"query": {
"match": {
"query": "iphone"
}
},
"size":0,
"aggs": {
"sample": {
"sampler": {
"shard_size": 200,
"field" : "user.id"
},
"aggs": {
"diversifiedMatches": {
"top_hits": {
"size":10
}
}
}
}
}
}
There are some caveats, e.g.:
1) Deduplication is per-shard, not global
2) The diversification field must be a single-valued field
3) No support for pagination
4) No support for sorting on anything other than score
Addressing the above issues would be hard and would require expensive/complex coordination internally, plus more guidance from the client about when and where "duplicate" results can be re-introduced (page 2? page 3? how many?), etc.
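For reference, newer Elasticsearch versions split the diversification part of sampler into a separate diversified_sampler aggregation. Below is a minimal sketch of the same idea using the Python client; the index name is hypothetical and the fields come from the question above.
from elasticsearch import Elasticsearch

es = Elasticsearch()

# At most max_docs_per_value hits per user.id enter the per-shard sample;
# top_hits then returns the best documents from that diversified sample.
resp = es.search(index="social_media", body={
    "size": 0,
    "query": {"multi_match": {"query": "foo boo", "fields": ["text", "link.title"]}},
    "aggs": {
        "sample": {
            "diversified_sampler": {
                "shard_size": 200,
                "field": "user.id",
                "max_docs_per_value": 1
            },
            "aggs": {
                "diversifiedMatches": {"top_hits": {"size": 10}}
            }
        }
    }
})

for hit in resp["aggregations"]["sample"]["diversifiedMatches"]["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("user", {}).get("id"))
The per-shard caveats listed above apply to this variant as well.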

Related

Paginate an aggregation sorted by hits on Elastic index

I have an Elastic index (say file) where I append a document every time the file is downloaded by a client. Each document is quite basic: it contains a filename field and a when date field that indicates the time of the download.
What I want to achieve is to get, for each file, the number of times it has been downloaded in the last 3 months. Thanks to another question, I have a query that returns all the results:
{
"query": {
"range": {
"when": {
"gte": "now-3M"
}
}
},
"aggs": {
"downloads": {
"terms": {
"field": "filename.keyword",
"size": 1000
}
}
},
"size": 0
}
Now, I want to have a paginated result. The terms aggregation cannot be paginated, so I use a composite aggregation. Of course, if there is a better aggregation, it can be used here...
So for the moment, I have something like that:
{
"query": {
"range": {
"when": {
"gte": "now-3M"
}
}
},
"aggs": {
"downloads_agg": {
"composite": {
"size": 100,
"sources": [
{
"downloads": {
"terms": {
"field": "filename.keyword"
}
}
}
]
}
}
},
"size": 0
}
This aggregation allows me to paginate (thanks to the after_key value in the response), but it is sorted by the filename, not by the number of downloads.
How can I sort that composite aggregation on the number of documents for each filename in my index?
Thanks.
The composite aggregation doesn't allow sorting based on the doc count or a metric value.
Excerpt from a discussion on the Elastic forum:
it's designed as a memory-friendly way to paginate over aggregations.
Part of the tradeoff is that you lose things like ordering by doc
count, since that isn't known until after all the docs have been
collected.
I have no experience with Transforms (part of X-Pack and licensed), but you can try that out. Apart from this, I don't see a way to get the expected output.
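If the number of distinct files is manageable, one workaround (a sketch rather than a drop-in solution; the index and field names are taken from the question) is to fetch a single terms aggregation, which is ordered by doc count by default, and paginate over the buckets client-side:
from elasticsearch import Elasticsearch

es = Elasticsearch()
PAGE_SIZE = 100
page = 0  # page requested by the client

resp = es.search(index="file", body={
    "size": 0,
    "query": {"range": {"when": {"gte": "now-3M"}}},
    "aggs": {
        "downloads": {
            "terms": {
                "field": "filename.keyword",
                # must be large enough to cover all distinct filenames
                # (subject to the search.max_buckets limit)
                "size": 10000
            }
        }
    }
})

# Buckets come back ordered by doc_count descending, so slice them in memory.
buckets = resp["aggregations"]["downloads"]["buckets"]
for bucket in buckets[page * PAGE_SIZE:(page + 1) * PAGE_SIZE]:
    print(bucket["key"], bucket["doc_count"])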

Elasticsearch Query for good title keyword results

We have an Elasticsearch index containing a catalog of products that we want to search by title and description.
We want it to have the following constraints:
We are searching title and description for occurrences (matches in title should be twice as important as matches in description)
We want it to have a very light fuzzy search result (but still accurate results)
Results that do not match the search term should not be filtered out, but only shown later (so matching results should be on top and worse results at the bottom)
category_id should filter products out (so no results of other categories should be shown)
The created_at attribute should be valued very high in sorting as well.
Products should lose score the "older" they get (this is very important, because they lose importance with every day).
I have tried to create a query like that, but the results are far from accurate, sometimes finding completely unrelated stuff. I think that's because of the wildcard query.
Also I think there must be a more elegant solution for the "created_at" scoring. Right?
I am using Elasticsearch 6.2
This is my current code. I would be happy to see an elegant solution for this:
{
"sort": [
{
"_score": {
"order": "desc"
}
}
],
"min_score": 0.3,
"size": 12,
"from": 0,
"query": {
"bool": {
"filter": {
"terms": {
"category_id": [
"212",
"213"
]
}
},
"should": [
{
"match": {
"title_completion": {
"query": "Development",
"boost": 20
}
}
},
{
"wildcard": {
"title": {
"value": "*Development*",
"boost": 1
}
}
},
{
"wildcard": {
"title_completion": {
"value": "*Development*",
"boost": 10
}
}
},
{
"match": {
"title": {
"query": "Development",
"operator": "and",
"fuzziness": 1
}
}
},
{
"range": {
"created_at": {
"gte": 1563264817998,
"boost": 11
}
}
},
{
"range": {
"created_at": {
"gte": 1563264040398,
"boost": 4
}
}
},
{
"range": {
"created_at": {
"gte": 1563256264398,
"boost": 1
}
}
}
]
}
}
}
First of all, building a request that returns relevant results is usually a difficult task. It can't be done without knowing the content of the documents. That said, I can give you hints to fulfill your requirements and avoid irrelevant results.
We are searching title and description for occurences (matches in title should be twice as important as description)
You can use boost as you did in your query to give more importance to matches on title compared to description.
We want it to have a very light fuzzy search result (but still accurate results)
You should use the AUTO value for the fuzziness parameter to get a different amount of fuzziness depending on the length of the term. E.g., by default, terms with fewer than 3 letters (the most common terms, where changing one letter can produce a different word) will not allow any changes, terms with 3 to 5 letters will allow one change, and terms with more than 5 letters will allow 2 changes. You can tune this behavior depending on your tests.
Not matching results to the searchterm should not be filtered out, but only shown later (so matching results should be on top and worse results should be at the bottom)
Use a should clause in the bool statement. Clauses in a should statement do not filter documents (unless specified otherwise); the queries in the should clause are only used to increase the score.
category_id should filter products out (so no results of other categories should be shown)
Use a must or filter clause in the bool statement to ensure that all documents satisfy a constraint. If you don't want the subqueries to contribute to the score (I believe that is your case), use filter instead of must, because filter clauses can be cached. Your query is OK for this requirement.
The created_at attribute should be valued very high in sorting as well. products should lose score the "older" they get. (This is very important, because they lose importance with every day)
You should use a function score with a decay function. If decay functions are not clear to you, you can skip the equations in the documentation and jump to the figure, which is self-explanatory. The following query is an example using a gauss decay function.
{
"function_score": {
// Name of the decay function
"gauss": {
// Field to use
"created_at": {
"origin": "now", // "now" is the default so you can omit this field
"offset": "1d", // Values with less than 1 day will not be impacted
"scale": "10d", // Duration for which the scores will be scaled using a gauss function
"decay" : 0.01 // Score for values further than scale
}
}
}
}
Hints for writing queries
Avoid wildcard queries: if you use *, they are not efficient and consume a lot of memory. If you want to be able to search within part of a term (e.g. finding "penthouse" when the user searches for "house"), you should create a subfield using an ngram tokenizer and write a standard match query against that subfield (see the mapping sketch after these hints).
Avoid setting a minimum score: the score is a relative value; a small or high score does not by itself tell you whether the document is relevant. You can read this article about the subject.
Be careful with fuzzy queries: fuzziness can generate a lot of noise and confuse users. In general, I would recommend increasing the default AUTO thresholds for fuzziness and accepting that some queries with misspellings will not return good results. Usually it is simpler for a user to spot a misspelling in their input than to understand why they got completely unrelated results.
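As a sketch of the ngram subfield mentioned in the first hint (the index, analyzer and field names are assumptions, Elasticsearch 6.x still expects a mapping type, and min_gram/max_gram should be tuned to your data):
from elasticsearch import Elasticsearch

es = Elasticsearch()

es.indices.create(index="products", body={
    "settings": {
        "analysis": {
            "tokenizer": {
                "trigram_tok": {"type": "ngram", "min_gram": 3, "max_gram": 4}
            },
            "analyzer": {
                "trigram_analyzer": {
                    "type": "custom",
                    "tokenizer": "trigram_tok",
                    "filter": ["lowercase"]
                }
            }
        }
    },
    "mappings": {
        "_doc": {
            "properties": {
                "title": {
                    "type": "text",
                    "fields": {
                        # ngram-analyzed copy of title: a plain match query on
                        # title.ngram finds partial terms without a wildcard
                        "ngram": {
                            "type": "text",
                            "analyzer": "trigram_analyzer",
                            "search_analyzer": "standard"
                        }
                    }
                }
            }
        }
    }
})
A match query on title.ngram can then replace the *Development* wildcard from the original query.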
Example query
This is just an example that you will need to adapt with your data.
{
  "size": 12,
  "query": {
    "function_score": {
      "query": {
        "bool": {
          "filter": {
            "terms": {
              "category_id": <CATEGORY_IDS>
            }
          },
          "should": [
            {
              "match": {
                "title": {
                  "query": <QUERY>,
                  "fuzziness": "AUTO:4:12",
                  "boost": 3
                }
              }
            },
            {
              "match": {
                "title_completion": {
                  "query": <QUERY>,
                  "boost": 1
                }
              }
            },
            {
              "match": {
                // title_completion subfield with ngram tokenizer
                "title_completion.ngram": {
                  "query": <QUERY>,
                  // Use a lower boost because it only matches partially
                  "boost": 0.5
                }
              }
            }
          ]
        }
      },
      // Name of the decay function
      "gauss": {
        // Field to use
        "created_at": {
          "origin": "now", // "now" is the default so you can omit this field
          "offset": "1d",  // Values less than 1 day old are not impacted
          "scale": "10d",  // Duration over which the score decays following a gauss curve
          "decay": 0.01    // Score at a distance of scale beyond origin + offset
        }
      }
    }
  }
}

Elasticsearch query speed-up with a repeatedly used terms query filter

I need to find the number of co-occurrences between one single tag and a fixed set of tags as a whole. I have 10,000 different single tags, and there are 10k tags inside the fixed set of tags. I loop through all the single tags, within the context of the fixed set of tags and a fixed time range. In total I have 1 billion documents in the index, spread over 20 shards.
Here is the Elasticsearch query (Elasticsearch 6.6.0):
es.search(index=index, size=0, body={
"query": {
"bool": {
"filter": [
{"range": {
"created_time": {
"gte": fixed_start_time,
"lte": fixed_end_time,
"format": "yyyy-MM-dd-HH"
}}},
{"term": {"tags": dynamic_single_tag}},
{"terms": {"tags": {
"index" : "fixed_set_tags_list",
"id" : 2,
"type" : "twitter",
"path" : "tag_list"
}}}
]
}
}, "aggs": {
"by_month": {
"date_histogram": {
"field": "created_time",
"interval": "month",
"min_doc_count": 0,
"extended_bounds": {
"min": two_month_start_time,
"max": start_month_start_time}
}}}
})
My question: is there any way to cache, inside Elasticsearch, the fixed 10k-tag terms query and the time range filter, so that the query time is reduced? The query above took 1.5s for one single tag.
What you are seeing is normal behavior for Elasticsearch aggregations (actually, pretty good performance given that you have 1 billion documents).
There are a couple of options you may consider: using a batch of filter aggregations, re-indexing with a subset of documents, and downloading the data out of Elasticsearch and computing the co-occurrences offline.
But probably it is worth trying to send those 10K queries and see if Elasticsearch built-in caching kicks in.
Let me explain in a bit more detail each of these options.
Using filter aggregation
First, let's outline what we are doing in the original ES query:
filter documents with created_time in a certain time window;
filter documents containing the desired tag dynamic_single_tag;
also filter documents that have at least one tag from the list fixed_set_tags_list;
count how many such documents there are per month in a certain time period.
The performance is a problem because we've got 10K tags to run such queries for.
What we can do here is move the filter on dynamic_single_tag from the query into the aggregations:
POST myindex/_doc/_search
{
"size": 0,
"query": {
"bool": {
"filter": [
{ "terms": { ... } }
]
}
},
"aggs": {
"by tag C": {
"filter": {
"term": {
"tags": "C" <== here's the filter
}
},
"aggs": {
"by month": {
"date_histogram": {
"field": "created_time",
"interval": "month",
"min_doc_count": 0,
"extended_bounds": {
"min": "2019-01-01",
"max": "2019-02-01"
}
}
}
}
}
}
}
The result will look something like this:
"aggregations" : {
"by tag C" : {
"doc_count" : 2,
"by month" : {
"buckets" : [
{
"key_as_string" : "2019-01-01T00:00:00.000Z",
"key" : 1546300800000,
"doc_count" : 2
},
{
"key_as_string" : "2019-02-01T00:00:00.000Z",
"key" : 1548979200000,
"doc_count" : 0
}
]
}
}
}
Now, if you are asking how this can help performance, here is the trick: add more such filter aggregations, one for each tag: "by tag D", "by tag E", etc.
The improvement will come from doing "batch" requests, combining many initial requests into one. It might not be practical to put all 10K of them in one query, but even batches of 100 tags per query can be a game changer.
(Side note: roughly the same behavior can be achieved via terms aggregation with include filter parameter.)
This method of course requires getting your hands dirty and writing a slightly more complex query, but it will come in handy if you need to run such queries at random times with zero preparation.
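A sketch of how such a batched request could be built programmatically with the Python client (index, field and variable names follow the question; the batch size is up to you):
from elasticsearch import Elasticsearch

es = Elasticsearch()

def cooccurrence_batch(tags_batch, fixed_start_time, fixed_end_time,
                       two_month_start_time, start_month_start_time):
    # One filter aggregation per tag, each with its own monthly histogram.
    aggs = {}
    for tag in tags_batch:
        aggs["by tag %s" % tag] = {
            "filter": {"term": {"tags": tag}},
            "aggs": {
                "by month": {
                    "date_histogram": {
                        "field": "created_time",
                        "interval": "month",
                        "min_doc_count": 0,
                        "extended_bounds": {
                            "min": two_month_start_time,
                            "max": start_month_start_time
                        }
                    }
                }
            }
        }
    return es.search(index="myindex", size=0, body={
        "query": {"bool": {"filter": [
            {"range": {"created_time": {
                "gte": fixed_start_time,
                "lte": fixed_end_time,
                "format": "yyyy-MM-dd-HH"
            }}},
            # terms lookup on the fixed set of tags, as in the original query
            {"terms": {"tags": {
                "index": "fixed_set_tags_list",
                "id": 2,
                "type": "twitter",
                "path": "tag_list"
            }}}
        ]}},
        "aggs": aggs
    })
Calling this with batches of, say, 100 tags turns 10K separate searches into 100.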
Re-index the documents
The idea behind the second method is to reduce the set of documents beforehand, via the reindex API. The reindex request might look like this:
POST _reindex
{
"source": {
"index": "myindex",
"type": "_doc",
"query": {
"bool": {
"filter": [
{
"range": {
"created_time": {
"gte": "fixed_start_time",
"lte": "fixed_end_time",
"format": "yyyy-MM-dd-HH"
}
}
},
{
"terms": {
"tags": {
"index": "fixed_set_tags_list",
"id": 2,
"type": "twitter",
"path": "tag_list"
}
}
}
]
}
}
},
"dest": {
"index": "myindex_reduced"
}
}
This query will create a new index, myindex_reduced, containing only the elements that satisfy the first 2 filter clauses.
At this point, the original query can be run without those 2 clauses.
The speed-up in this case will come from limiting the number of documents; the smaller that number, the bigger the gain. So, if fixed_set_tags_list leaves you with a small portion of the 1 billion, this is the option you can definitely try.
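For illustration, the follow-up query against the reduced index might then look like this (a sketch; the index name follows the reindex example and the placeholder values stand in for the variables of the question):
from elasticsearch import Elasticsearch

es = Elasticsearch()

dynamic_single_tag = "some_tag"        # one tag from the 10K loop
two_month_start_time = "2019-01-01"    # placeholder histogram bounds
start_month_start_time = "2019-03-01"

# The time range and fixed-set-of-tags filters were already applied during
# reindexing, so only the per-tag filter remains.
resp = es.search(index="myindex_reduced", size=0, body={
    "query": {"bool": {"filter": [
        {"term": {"tags": dynamic_single_tag}}
    ]}},
    "aggs": {
        "by_month": {
            "date_histogram": {
                "field": "created_time",
                "interval": "month",
                "min_doc_count": 0,
                "extended_bounds": {
                    "min": two_month_start_time,
                    "max": start_month_start_time
                }
            }
        }
    }
})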
Downloading data and processing outside Elasticsearch
To be honest, this use case looks more like a job for pandas. If data analytics is your case, I would suggest using the scroll API to extract the data to disk and then processing it with an arbitrary script.
In Python it could be as simple as using the scan() helper of the elasticsearch library.
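A minimal sketch of that extraction, assuming the same index and fields as in the question (placeholder time bounds):
from elasticsearch import Elasticsearch, helpers
import csv

es = Elasticsearch()

fixed_start_time = "2019-01-01-00"  # placeholder, format yyyy-MM-dd-HH
fixed_end_time = "2019-03-01-00"    # placeholder

# Stream every matching document (scroll under the hood) and dump the fields
# needed for offline co-occurrence counting, e.g. with pandas.
with open("tags_dump.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["created_time", "tags"])
    for hit in helpers.scan(
        es,
        index="myindex",
        query={"query": {"range": {"created_time": {
            "gte": fixed_start_time,
            "lte": fixed_end_time,
            "format": "yyyy-MM-dd-HH"
        }}}},
        _source=["created_time", "tags"]
    ):
        src = hit["_source"]
        writer.writerow([src["created_time"], "|".join(src.get("tags", []))])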
Why not try the brute force approach?
Elasticsearch will already try to help you with your query via the request cache. It is applied only to pure-aggregation queries (size: 0), so it should work in your case.
But it will not, because the content of the query will always be different (the whole JSON of the query is used as the caching key, and we have a new tag in every query). A different level of caching starts to play instead.
Elasticsearch relies heavily on the filesystem cache, which means that under the hood the more frequently accessed blocks of the filesystem get cached (practically loaded into RAM). For the end user this means that "warming up" comes slowly and with a volume of similar requests.
In your case, aggregations and filtering will occur on 2 fields: created_time and tags. This means that after doing maybe 10 or 100 requests with different tags, the response times will drop from 1.5s to something more bearable.
To demonstrate my point, here is a Vegeta plot from my study of Elasticsearch performance under the same query with heavy aggregations, sent at a fixed RPS:
As you can see, initially the request was taking ~10s, and after 100 requests it had diminished to a brilliant 200ms.
I would definitely suggest trying this "brute force" approach, because if it works it is good, and if it does not, it costs nothing.
Hope that helps!

Give more score to documents that contain all query terms

I have a problem with scoring in Elasticsearch. When a user enters a query that contains 3 terms, sometimes a document that contains only two of the words many times outscores a document that contains all three words. For example, if a user enters "elasticsearch query tutorial", I want documents that contain all of these words to score higher than a document with a lot of "tutorial" and "elasticsearch" terms in it.
P.S.: I am using minimum_should_match and shingles in my query. While they made the ranking a lot better, they did not solve this problem completely. I need something like the query coordination in Lucene's practical scoring function. Is there anything like that in Elasticsearch with BM25?
One of the possible solutions could be using function score:
{
"query": {
"function_score": {
"query": { "match_all": {} },
"functions": [
{
"filter": { "match": { "title": "elasticsearch" } },
"weight": 1
},
{
"filter": { "match": { "title": "query" } },
"weight": 1
},
{
"filter": { "match": { "title": "tutorial" } },
"weight": 1
}
],
"score_mode": "sum"
}
}
}
In this case, you would clearly get a better position for documents with more matching terms. However, this approach completely ignores TF-IDF and any other relevance parameters.
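If you do not want to throw away the relevance score entirely, one option (a sketch; the index name is hypothetical and the fields follow the example above) is to keep the real query inside function_score and add the per-term weights on top of the BM25 score via boost_mode:
from elasticsearch import Elasticsearch

es = Elasticsearch()

resp = es.search(index="articles", body={
    "query": {
        "function_score": {
            # Keep the normal BM25-scored query instead of match_all
            "query": {
                "match": {
                    "title": {
                        "query": "elasticsearch query tutorial",
                        "minimum_should_match": "2<75%"
                    }
                }
            },
            # One weight function per query term: every extra matching term
            # adds a fixed bonus on top of the relevance score.
            "functions": [
                {"filter": {"match": {"title": "elasticsearch"}}, "weight": 10},
                {"filter": {"match": {"title": "query"}}, "weight": 10},
                {"filter": {"match": {"title": "tutorial"}}, "weight": 10}
            ],
            "score_mode": "sum",   # sum the weights of the matching filters
            "boost_mode": "sum"    # add that sum to the BM25 score
        }
    }
})
The size of the weights relative to typical BM25 scores controls how strongly the number of matched terms dominates the ranking.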

What differs between post-filter and global aggregation for faceted search?

A common problem in search interfaces is that you want to return a selection of results, but might also want to return information about all documents (e.g. I want to see all red shirts, but want to know what other colors are available).
This is sometimes referred to as "faceted results" or "faceted navigation". The example from the Elasticsearch reference is quite clear in explaining why and how, so I've used it as a base for this question.
Summary / Question: It looks like I can use either a post-filter or a global aggregation for this. They both seem to provide the exact same functionality in a different way. Are there advantages or disadvantages to either that I don't see? If so, which should I use?
I have included a complete example below with some documents and a query with both types of method based on the example
in the reference guide.
Option 1: post-filter
See the example from the Elasticsearch reference.
What we can do is request more results in our original query, so we can aggregate on those results, and afterwards filter our actual results.
The example is quite clear in explaining it:
But perhaps you would also like to tell the user how many Gucci shirts are available in other colors. If you just add a terms aggregation on the color field, you will only get back the color red, because your query returns only red shirts by Gucci.
Instead, you want to include shirts of all colors during aggregation, then apply the colors filter only to the search results.
See for how this would look below in the example code.
An issue with this is that we cannot use caching. This is warned about in the (not yet available for 5.1) Elasticsearch guide:
Performance consideration
Use a post_filter only if you need to differentially filter search results and aggregations. Sometimes people will use post_filter for regular searches.
Don’t do this! The nature of the post_filter means it runs after the query, so any performance benefit of filtering (such as caches) is lost completely.
The post_filter should be used only in combination with aggregations, and only when you need differential filtering.
There is however a different option:
Option 2: global aggregations
There is a way to do an aggregation that is not influenced by the search query.
So instead of fetching a lot, aggregating on that, and then filtering, we just get our filtered results, but run the aggregations on everything. Take a look at the reference.
We can get the exact same results. I did not read any warnings about caching for this, but it seems like in the end we need to do about the same amount of work, so maybe that is the only omission.
It is a tiny bit more complicated because of the sub-aggregation we need (you can't have global and a filter at the same 'level').
The only complaint I have read about queries using this is that you might have to repeat yourself if you need to do this for several items. In the end we can generate most queries, so repeating oneself isn't that much of an issue for my use case, and I do not really consider it an issue on par with "can not use cache".
Question
It seems these two features overlap at the very least, or possibly provide the exact same functionality, which baffles me.
Apart from that, I'd like to know whether one or the other has an advantage I haven't seen, and whether there is any best practice here.
Example
This is largely from the post-filter reference page, but I added the global aggregation query.
mapping and documents
PUT /shirts
{
"mappings": {
"item": {
"properties": {
"brand": { "type": "keyword"},
"color": { "type": "keyword"},
"model": { "type": "keyword"}
}
}
}
}
PUT /shirts/item/1?refresh
{
"brand": "gucci",
"color": "red",
"model": "slim"
}
PUT /shirts/item/2?refresh
{
"brand": "gucci",
"color": "blue",
"model": "slim"
}
PUT /shirts/item/3?refresh
{
"brand": "gucci",
"color": "red",
"model": "normal"
}
PUT /shirts/item/4?refresh
{
"brand": "gucci",
"color": "blue",
"model": "wide"
}
PUT /shirts/item/5?refresh
{
"brand": "nike",
"color": "blue",
"model": "wide"
}
PUT /shirts/item/6?refresh
{
"brand": "nike",
"color": "red",
"model": "wide"
}
We are now requesting all red gucci shirts (items 1 and 3), the models of shirts we have for these 2 shirts (slim and normal), and which colors of gucci shirts there are (red and blue).
First, a post filter: get all shirts, aggregate the models for red gucci shirts and the colors for gucci shirts (all colors), and post-filter for red gucci shirts to show only those as results. (This is a bit different from the example, as we try to keep it as close to a pure application of post-filters as possible.)
GET /shirts/_search
{
"aggs": {
"colors_query": {
"filter": {
"term": {
"brand": "gucci"
}
},
"aggs": {
"colors": {
"terms": {
"field": "color"
}
}
}
},
"color_red": {
"filter": {
"bool": {
"filter": [
{
"term": {
"color": "red"
}
},
{
"term": {
"brand": "gucci"
}
}
]
}
},
"aggs": {
"models": {
"terms": {
"field": "model"
}
}
}
}
},
"post_filter": {
"bool": {
"filter": [
{
"term": {
"color": "red"
}
},
{
"term": {
"brand": "gucci"
}
}
]
}
}
}
We could also get all red gucci shirts (our original query), and then do a global aggregation for the model (for all red gucci shirts) and for the color (for all gucci shirts).
GET /shirts/_search
{
"query": {
"bool": {
"filter": [
{ "term": { "color": "red" }},
{ "term": { "brand": "gucci" }}
]
}
},
"aggregations": {
"color_red": {
"global": {},
"aggs": {
"sub_color_red": {
"filter": {
"bool": {
"filter": [
{ "term": { "color": "red" }},
{ "term": { "brand": "gucci" }}
]
}
},
"aggs": {
"keywords": {
"terms": {
"field": "model"
}
}
}
}
}
},
"colors": {
"global": {},
"aggs": {
"sub_colors": {
"filter": {
"bool": {
"filter": [
{ "term": { "brand": "gucci" }}
]
}
},
"aggs": {
"keywords": {
"terms": {
"field": "color"
}
}
}
}
}
}
}
}
Both will return the same information; the second one only differs because of the extra level introduced by the sub-aggregations. The second query looks a bit more complex, but I don't think that is very problematic. A real-world query is generated by code and probably way more complex anyway; it should be a good query, and if that means complicated, so be it.
The actual solution we used, while not a direct answer to the question, is basically "neither".
From this elastic blogpost we got the initial hint:
Occasionally, I see an over-complicated search where the goal is to do as much as possible in as few search requests as possible. These tend to have filters as late as possible, completely in contrary to the advise in Filter First. Do not be afraid to use multiple search requests to satisfy your information need. The multi-search API lets you send a batch of search requests.
Do not shoehorn everything into a single search request.
And that is basically what we are doing in the query above: a big bunch of aggregations and some filtering.
Running them as separate requests in parallel proved to be much, much quicker. Have a look at the multi-search API.
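A sketch of what that can look like with the Python client, splitting the hits query and the two facet aggregations of the shirts example into three parallel searches (depending on the client version, msearch accepts a list of dicts or pre-built newline-delimited JSON):
from elasticsearch import Elasticsearch

es = Elasticsearch()

red_gucci = {"bool": {"filter": [
    {"term": {"color": "red"}},
    {"term": {"brand": "gucci"}}
]}}

# _msearch takes alternating header/body pairs; Elasticsearch runs the
# individual searches in parallel.
responses = es.msearch(body=[
    {"index": "shirts"},
    {"query": red_gucci},                                    # the actual hits
    {"index": "shirts"},
    {"size": 0, "query": red_gucci,
     "aggs": {"models": {"terms": {"field": "model"}}}},     # models facet
    {"index": "shirts"},
    {"size": 0, "query": {"term": {"brand": "gucci"}},
     "aggs": {"colors": {"terms": {"field": "color"}}}}      # colors facet
])["responses"]

hits_response, models_response, colors_response = responses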
In both cases Elasticsearch will end up doing mostly the same thing. If I had to choose, I think I'd use the global aggregation, which might save you some overhead from having to feed two Lucene collectors at once.
