Difference between Elasticsearch Range Query and Range Filter - elasticsearch

I want to query elasticsearch documents within a date range. I have two options now, both work fine for me. Have tested both of them.
1. Range Query
2. Range Filter
Since I have a small data set for now, I am unable to test the performance of either of them. What is the difference between these two, and which one would result in faster retrieval of documents and a faster response?

The main difference between queries and filters has to do with scoring. Queries return documents with a relative ranked score for each document. Filters do not. This difference allows a filter to be faster for two reasons. First, it does not incur the cost of calculating the score for each document. Second, it can cache the results as it does not have to deal with possible changes in the score from moment to moment - it's just a boolean really, does the document match or not?
From the documentation:
Filters are usually faster than queries because:
they don’t have to calculate the relevance _score for each document — the answer is just a boolean “Yes, the document matches the filter” or “No, the document does not match the filter”.
the results from most filters can be cached in memory, making subsequent executions faster.
As a practical matter, the question is: do you use the relevance score in any way? If not, filters are the way to go. If you do, filters may still be of use, but should be used where they make sense. For instance, if you had a language field (let's say language: "EN" as an example) in your documents and wanted to query by language along with a relevance score, you would combine a query for the text search with a filter for language. The filter would cache the document ids for all documents in English, and then the query could be applied to that subset.
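For concreteness, here is a minimal sketch of the two options from the question, assuming a local 1.x node, a hypothetical index called posts and a date field called created_at; on 2.x and later the filtered query below is replaced by a bool query with a filter clause:

import requests

ES = "http://localhost:9200/posts/_search"   # hypothetical index and field names

date_range = {"created_at": {"gte": "2014-01-01", "lte": "2014-12-31"}}

# Option 1: range query - scored, not cached
range_query = {"query": {"range": date_range}}

# Option 2: range filter inside a filtered query (1.x syntax) - no scoring,
# and the matching document set can be cached and reused
range_filter = {
    "query": {
        "filtered": {
            "query": {"match_all": {}},
            "filter": {"range": date_range}
        }
    }
}

for body in (range_query, range_filter):
    print(requests.post(ES, json=body).json()["hits"]["total"])

Both return the same documents; only the first assigns meaningful scores.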
I'm oversimplifying a bit, but that's the basics. Good places to read up on this:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-filtered-query.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/0.90/query-dsl-filtered-query.html
http://exploringelasticsearch.com/searching_data.html
http://elasticsearch-users.115913.n3.nabble.com/Filters-vs-Queries-td3219558.html

Filters are cached so they are faster!
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/filter-caching.html

Related

How does Solr's spellcheck.collate influence performance?

The Solr documentation on spellchecking parameters states (emphasis mine):
spellcheck.collate
If true, this parameter directs Solr to take the best suggestion for
each token (if one exists) and construct a new query from the
suggestions. [...]
The spellcheck.collate parameter only returns collations that are guaranteed to result in hits if re-queried, even when applying
original fq parameters. This is especially helpful when there is more
than one correction per query.
This only returns a query to be used. It does not actually run the suggested query.
I'd imagine that, in order to decide whether the corrected terms yield a result, Solr still has to run a variant of the original query in the background. Sure, it can ignore most parts of the original query, such as grouping, and does not have to compute the relevance of results, but it will still have to perform the whole filter query, stemming, fuzzy search, etc.
So can I expect spellcheck.collate to have a performance impact depending on the complexity of my filter query and certain other parts of the original query?
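One way to probe this empirically (a sketch only; the core name products, the fq field and the misspelling are made up) would be to time the same request with and without collation, and to cap the verification work with spellcheck.maxCollationTries:

import time
import requests

SOLR = "http://localhost:8983/solr/products/select"   # hypothetical core

def timed(params):
    start = time.time()
    requests.get(SOLR, params=params)
    return time.time() - start

base = {
    "q": "ipohne",                    # deliberately misspelled
    "fq": "category:electronics",     # the filter query whose cost is in question
    "spellcheck": "true",
    "wt": "json",
}

no_collate = dict(base, **{"spellcheck.collate": "false"})
with_collate = dict(base, **{
    "spellcheck.collate": "true",
    # cap how many candidate collations Solr verifies against the index
    "spellcheck.maxCollationTries": 5,
})

print("without collate:", timed(no_collate))
print("with collate:   ", timed(with_collate))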

Compare Elasticsearch query score across multiple queries

I'm trying to query and compare the scores of two MLT queries, but am a bit confused by what I read here:
https://www.elastic.co/guide/en/elasticsearch/guide/current/practical-scoring-function.html
Even though the intent of the query norm is to make results from
different queries comparable, it doesn’t work very well. The only
purpose of the relevance _score is to sort the results of the current
query in the correct order. You should not try to compare the
relevance scores from different queries.
If I run an MLT query and document 'A' comes back as similar to document 'B' with a score of 0.4, then conversely, running the MLT query the other way round, document 'B' comes back as similar to document 'A' with a score of 2.4.
I would expect the score to be the same based on the tokens matched in the MLT, but that's not the case.
Also, suppose I run an MLT query and document 'A' comes back as similar to document 'B' with a score of 0.6, and in another MLT query document 'C' comes back as similar to document 'A' with a score of 4.7.
So my questions are:
Does this imply that C is much more similar to A than B?
Also, what's the best way for me to compare multiple queries in Elasticsearch when the scores are different?
Thanks,
- Phil
1.
No, it doesn't. As you noted in your question, you should not compare the scores of different queries. If you want to get a meaningful answer for which documents are most similar to C, you should generate an MLT query for document C and search with that.
This is made doubly true by how MLT queries work. MLT attempts to generate a list of interesting terms to search for from your document (based on the library of terms in the index), and searches for them. The set of terms generated from document A may be much different from the set generated from document B, hence the wildly different scores when finding A from B and vice versa, even though the documents themselves obviously have the same overlap.
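A sketch of that suggestion, assuming a local node and a hypothetical index articles with type doc and text fields title and body (the like syntax below is the 2.x+ form); it asks "which documents look like C" directly instead of reusing scores from another query:

import requests

ES = "http://localhost:9200/articles/_search"   # hypothetical index

mlt_for_c = {
    "query": {
        "more_like_this": {
            "fields": ["title", "body"],
            # build the query from document C itself
            "like": [{"_index": "articles", "_type": "doc", "_id": "C"}],
            "min_term_freq": 1,
            "max_query_terms": 25
        }
    }
}

hits = requests.post(ES, json=mlt_for_c).json()["hits"]["hits"]
# the ranking, not the absolute _score, tells you which documents are most similar to C
for hit in hits:
    print(hit["_id"], hit["_score"])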
2.
Don't. Listen to the docs. Scores are only designed to rank how well documents match the query that generated them. Using them outside that context is not meaningful. Rethink what you are trying to accomplish.

List items in some indices first in Elasticsearch search results

I'm scraping a few sites and relisting their products; each site has its own index in Elasticsearch. Some sites have affiliate programs, and I'd like to list those first in my search results.
Is there a way for me to "boost" results from a certain index?
Should I write a field hasAffiliate: true into ES when I'm scraping, and then boost the query clauses that match that value? Or is there a better way?
Using boost, it could be difficult to guarantee that they appear first in the search results. According to the official guide:
Practically, there is no simple formula for deciding on the “correct”
boost value for a particular query clause. It’s a matter of
try-it-and-see. Remember that boost is just one of the factors
involved in the relevance score
https://www.elastic.co/guide/en/elasticsearch/guide/current/query-time-boosting.html
It depends on the type of queries you are doing, but here you have a couple of other options:
A score function with weights: could be a more predictable option.
Simply using a sort by hasAffiliate (the easiest one).
Note: I'm not sure whether sorting by a boolean field is possible; in that case you could map hasAffiliate as a byte integer (the smallest one), setting it to 1 when true.
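A sketch of both options, assuming a hypothetical alias products that spans the per-site indices, a boolean hasAffiliate field and a made-up match query on a name field; the weight value is something to tune:

import requests

ES = "http://localhost:9200/products/_search"   # hypothetical alias over all site indices

# Option 1: function_score with a weight for affiliate documents (more predictable than boost)
weighted = {
    "query": {
        "function_score": {
            "query": {"match": {"name": "running shoes"}},
            "functions": [
                {"filter": {"term": {"hasAffiliate": True}}, "weight": 3}
            ],
            "boost_mode": "multiply"
        }
    }
}

# Option 2: hard guarantee - sort on hasAffiliate first, relevance second.
# If your version cannot sort on a boolean, map hasAffiliate as a byte (0/1)
# as suggested in the note above.
sorted_first = {
    "query": {"match": {"name": "running shoes"}},
    "sort": [
        {"hasAffiliate": {"order": "desc"}},
        "_score"
    ]
}

for body in (weighted, sorted_first):
    hits = requests.post(ES, json=body).json()["hits"]["hits"]
    print([hit["_id"] for hit in hits])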

Solr Boosting Logic Concepts

I'm trying to understand boosting and if boosting is the answer to my problem.
I have an index and that has different types of data.
E.g.: index Animals. One of the fields is animaltype. This value can be Carnivorous, Herbivorous, etc.
Now when we query in search, I want to show results of type Carnivorous at the top, and then the Herbivorous type.
Also would it be possible to show only say top 3 results from a type and then remaining from other types?
Let's assume that for the Herbivorous type we have a field named vegetables. This will have values only for a Herbivorous animaltype.
Now, can it be possible to have boosting rules specified as follows:
Boost Levels:
animaltype:Carnivorous
then animaltype:Herbivorous and vegetablesfield: spinach
then animaltype:Herbivorous and vegetablesfield: carrot
etc. Basically boosting on various fields at various levels. I'm new to this concept. It would be really helpful to get some inputs/guidance.
Thanks,
Kasturi Chavan
Your example is closer to sorting than boosting, as you have a priority list for how important each document is, while boosting (in Solr) is usually applied a bit more fluently, meaning that there is no hard line between documents of type X and type Y.
However, boosting with appropriately large values will in effect give you the same result, putting the documents into different score "areas" which will then give you the sort order you're looking for. You can see the score contributed by each term by appending debugQuery=true to your query. Boosting says that 'a document with this value is z times more important than those with a different value', but if the document only contains low-scoring tokens from the search (usually words that are very common), while other documents contain high-scoring tokens (words that are infrequent), the latter document might still be considered more important.
Example: searching for "city paris", where most documents contain the word 'city', but only a few contain the word 'paris' (and do not contain 'city'). Even if you boost all documents assigned to country 'germany', the score contributed by 'city' might still be lower, even with the boost factor, than what 'paris' contributes alone. This might not occur in real life, but you should know what the boost actually changes.
Using the edismax handler, you can apply the boost in two different ways - one is to use boost=, which is multiplicative, or to use either bq= or bf=, which are additive. The difference is how the boost contributes to the end score.
For your example, the easiest way to get something similar to what you're asking, is to use bq (boost query):
bq=animaltype:Carnivorous^1000&
bq=animaltype:Herbivorous^10
These boosts will probably be large enough to move all documents matching these queries into their own buckets, without moving between groups. To create "different levels" as your example shows, you'll need to tweak these values (and remember, multiple boosts can be applied to the same document if something is both herbivorous and eats spinach).
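Here is a runnable version of the bq approach (a sketch; the core name animals and the qf fields are made up), showing how the repeated bq parameters and debugQuery fit together:

import requests

SOLR = "http://localhost:8983/solr/animals/select"   # hypothetical core

params = {
    "defType": "edismax",
    "q": "sharp teeth",
    "qf": "name description",        # made-up query fields
    # repeated bq parameters: additive boost per animaltype
    "bq": ["animaltype:Carnivorous^1000", "animaltype:Herbivorous^10"],
    "debugQuery": "true",            # shows the score contribution of each clause
    "wt": "json",
}

response = requests.get(SOLR, params=params).json()
for doc in response["response"]["docs"]:
    print(doc.get("animaltype"), doc.get("name"))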
A different approach would be to create a function query using query, if and similar functions to result in a single integer value that you can use as a sorting value. You can also calculate this value when indexing the document if it's static (which your example is), and then sort by that field instead. It will require you to reindex your documents if the sorting values change, but it might be an easy and effective solution.
To achieve the "Top 3 results from a type" you're probably going to want to look at Result grouping support - which makes it possible to get "x documents" for each value in a single field. There is, as far as I know, no way to say "I want three of these at the top, then the rest from other values", except for doing multiple queries (and excluding the three you've already retrieved from the second query). Usually issuing multiple queries works just as fine (or better) performance wise.

Way to factor in search locality in Solr/Elasticsearch/Sphinx?

My problem is searching the data of thousands of users, e.g. mailboxes. Almost all the time the search is filtered by user id. How could this locality of searches be taken into consideration? I'm trying to achieve performance comparable to the case where each user has a dedicated index.
Sharding by user is not an option because sharding will already be used for scale (total number of users ~1M), and I'm looking for a solution to use inside a shard of ~4k users.
Well, it can be done in Sphinx with attributes. Most of the time you can make the search more efficient by adding the user id as a fake keyword too. Then the documents can be filtered during the full-text stage. (Still keep the attribute too, so as to avoid the possibility of manipulating results by constructing a careful query that returns results from other users.)
E.g., add _user1234 as a full-text keyword, then add WHERE MATCH('example _user1234') AND user = 1234 to the query; this finds documents just from that user.
One possible solution is to group documents of the same user into an inverted index block. Given that an inverted index block is sorted by document id, such grouping can be done only by assigning ids to documents appropriately: the same user's documents should have monotonic ids. There could be minor violations of this rule; it would not harm performance significantly.
Implementations:
Index sorting has just become a first-class citizen in Lucene 6.2.
It could be achieved in Elasticsearch 2.3 (see here), and I think it's achievable in Solr in the same way.
As for Sphinx, I suppose the same technique of assigning monotonic document ids should work.
For more technical reasoning see previous link.
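A sketch of the index-sorting route as exposed in Elasticsearch 6.0 and later, which is where the Lucene feature surfaced (the index and field names are made up, and the typeless mapping is 7.x syntax). Documents sharing a user_id end up adjacent on disk, which gives the grouping effect described above:

import requests

# hypothetical index for the mailbox use case
index_body = {
    "settings": {
        "index": {
            "sort.field": "user_id",   # keep each user's documents together
            "sort.order": "asc"
        }
    },
    "mappings": {
        "properties": {
            "user_id": {"type": "keyword"},
            "body": {"type": "text"}
        }
    }
}

# create the index with an index-time sort on user_id
requests.put("http://localhost:9200/mailboxes", json=index_body)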
