How does relevance scoring in Elasticsearch differ from Couchbase?

I wonder whether the relevance score in Elasticsearch differs from the one in Couchbase or not?

As per this 2019 Couchbase thread, it looks like Couchbase is still using TF/IDF for scoring, while Elasticsearch used to use the same algorithm but moved to the BM25 algorithm for score calculation as of version 5.0.
Note: TF/IDF is a very popular algorithm for calculating the relevance score, based on term frequency and inverse document frequency, while BM25 is a newer, improved form based on probabilistic scoring; more details about them can be found here and here.
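To make the difference concrete, here is a toy sketch (simplified formulas with made-up corpus statistics and length normalization omitted; not the exact Lucene implementations): classic TF/IDF grows linearly with term frequency, while BM25's term-frequency component saturates.

```python
import math

N, df = 1_000, 10        # made-up corpus size and document frequency
idf = math.log(N / df)   # a simple idf variant

def tfidf(tf):
    # classic TF/IDF: linear in term frequency
    return tf * idf

def bm25(tf, k1=1.2):
    # BM25 term component (length normalization omitted for brevity)
    return idf * tf * (k1 + 1) / (tf + k1)

for tf in (1, 2, 5, 20):
    print(tf, round(tfidf(tf), 2), round(bm25(tf), 2))
```

Repeating a term 20 times multiplies the TF/IDF contribution by 20, while the BM25 contribution levels off; that diminishing return is one of BM25's main practical improvements.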
Note: The question doesn't mention for what purpose you are comparing the relevance of the two systems. My two cents: if you are building a full-blown search system and relevance matters to you, then you should choose Elasticsearch, whose primary function is search and which has a lot of flexibility in choosing different algorithms and different ways to define the scoring mechanism, which is not present in a NoSQL solution like Couchbase.

Related

Is there a way to show at what percentage a selected document is similar to others on ElasticSearch?

I need to build a search engine using Elasticsearch, and the steps will be as follows:
Search on the search engine with a search string.
The relevant results will display and I can click on these documents.
If I select a document, I will be redirected to another page where I will see all the details of the document and will have an option "More Like This" (which will return documents similar to the selected document). I know that this is done using the MLT query.
Now my question is: besides returning documents similar to the selected one, how can I also return the percentage to which each of those documents is similar to the selected one?
There are a couple of things you can do.
using function_score query
more_like_this query is essentially a full-text search, and it returns documents ordered by their relevance score. It could be possible to convert the score directly to a percentage, but it is not advised (here, and more specifically here).
Instead, one can define a custom score with the help of a function_score query, which can be designed so that it returns a meaningful percentage.
This, of course, comes with the additional cost of complexity, and the definition of "similarity" becomes more of an art than a science.
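As a rough illustration of the idea, here is a hedged sketch using recent versions of the official Python client (older versions take body= instead of query=). The index name articles, the fields, the document id, and especially the MAX_SCORE calibration constant are all hypothetical assumptions you would need to tune for your corpus:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

query = {
    "function_score": {
        "query": {
            "more_like_this": {
                "fields": ["title", "body"],
                "like": [{"_index": "articles", "_id": "42"}],
                "min_term_freq": 1,
            }
        },
        "script_score": {
            "script": {
                # cap the raw BM25 score at an assumed maximum, rescale to 0-100
                "source": "Math.min(_score / params.MAX_SCORE, 1.0) * 100",
                "params": {"MAX_SCORE": 20.0},
            }
        },
        # use the script result as the final score instead of multiplying
        "boost_mode": "replace",
    }
}

resp = es.search(index="articles", query=query)
for hit in resp["hits"]["hits"]:
    print(hit["_id"], f"{hit['_score']:.1f}% similar")
```

The weak point is MAX_SCORE: BM25 scores are unbounded, so any cap is a judgment call, which is exactly why this approach is more art than science.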
using dense_vector
One may opt to use the (at the time of writing, still experimental) dense_vector data type, which allows storing and comparing dense vectors (that is, arrays of numbers of fixed size). Here's an article that describes this approach very well: Text similarity search with vector fields.
In this case the definition of similarity is as precise as it can possibly be: a distance of two vectors in a multidimensional space, which can be computed via, for instance, cosine similarity.
However, such dense vectors have to be computed somehow, and the quality of those vectors will directly determine the quality of the similarity itself.
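For reference, a minimal sketch of such a query, assuming a hypothetical index whose mapping includes a dense_vector field named embedding, with the vectors precomputed outside Elasticsearch. cosineSimilarity returns a value in [-1, 1], which can be rescaled to a percentage:

```python
query = {
    "script_score": {
        "query": {"match_all": {}},
        "script": {
            # shift cosine similarity from [-1, 1] into [0, 100]
            "source": "(cosineSimilarity(params.query_vector, 'embedding') + 1.0) / 2.0 * 100",
            # hypothetical embedding of the selected document
            "params": {"query_vector": [0.12, -0.37, 0.05]},
        },
    }
}
```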
The bottom line is that, to make this work with Elasticsearch, a fair amount of computation and logic has to be added outside of it, either in the form of pre-computed models or custom-curated scoring algorithms. Out of the box, Elasticsearch does not seem to be a good fit for percentage-based similarity.
Hope that helps!
If you're going the route of semantic search via dense_vector, as Nikolay mentioned, I would recommend NBoost. NBoost has a good out-of-the-box system for improving Elasticsearch results with SOTA models.

How does cosine similarity differ from Okapi BM25?

I'm conducting research using Elasticsearch. I was planning to use cosine similarity, but I noted that it is unavailable and that we have BM25 as the default scoring function instead.
Is there a reason for that? Is cosine similarity improper for querying documents? Why was BM25 chosen as the default?
Thanks
For a long time Elasticsearch used the TF/IDF algorithm to compute query similarity, but a number of versions ago it changed to BM25, which is more effective. You can read about it in the documentation, and there is a good article that explains what Elasticsearch is and how similarity works in ES.
You can also write a custom similarity algorithm for Elasticsearch. Here is a good article about how to do that.
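To make the conceptual difference concrete, here is a toy sketch (simplified formulas with made-up numbers, not Elasticsearch internals): cosine similarity compares two term-frequency vectors symmetrically and ignores corpus statistics, while BM25 weights each query term by rarity (idf), saturates term frequency, and normalizes by document length.

```python
import math

def cosine(u, v):
    # angle-based similarity of two raw term-frequency vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def bm25(tfs, dfs, n_docs, doc_len, avg_len, k1=1.2, b=0.75):
    # sum of per-term contributions: idf * saturated tf * length norm
    score = 0.0
    for tf, df in zip(tfs, dfs):
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return score

# query and document as tf vectors over a shared two-term vocabulary
print(cosine([1, 1], [3, 0]))
# same document for BM25: per-query-term tf in the doc plus corpus statistics
print(bm25(tfs=[3, 0], dfs=[10, 500], n_docs=1000, doc_len=120, avg_len=100))
```

Cosine similarity is bounded but needs externally supplied vectors (which is why Elasticsearch exposes it for dense_vector fields rather than as a text similarity), whereas BM25 works directly from the statistics in the inverted index, which is a large part of why it makes a good default.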

Does BM25 use query coordinator?

In Lucene's practical scoring function there is a query coordinator which punishes documents that fail to match all the query terms. Does Okapi BM25 use the same trick?
The reason I'm curious about it is that I'm using Elasticsearch with the BM25 similarity module, and sometimes I feel this algorithm does not favor documents with more matches. There are cases where a document that contains one or two of the terms many times outscores a document containing all the query terms.
Yes and no.
No, it doesn't use a coord factor as described by the old Lucene default similarity (note: Lucene core now uses BM25 by default, as well).
Yes, it does weigh hits on more of the query terms more heavily than a bunch of hits on the same term. It does this with better term saturation, making the old coord factor effectively obsolete.
It is, however, always possible that many hits on fewer terms will outscore a few hits on more terms using either algorithm.
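Here is a small numeric sketch of that saturation effect (assuming equal idf for every term and ignoring length normalization): each additional distinct matched term contributes a fresh "unit" of score, while repeats of one term flatten out quickly.

```python
k1 = 1.2

def sat(tf):
    # BM25 term-frequency component: bounded above by k1 + 1
    return tf * (k1 + 1) / (tf + k1)

one_term_repeated = sat(50)     # one query term occurring 50 times
three_terms_once = 3 * sat(1)   # all three query terms, once each
print(round(one_term_repeated, 2), round(three_terms_once, 2))
```

Under equal weights, the document matching all three terms wins (3.0 vs. roughly 2.15); with a sufficiently rare repeated term, though, its idf weight can tip the balance the other way, which matches what you are observing.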

How do normalization and internal optimization of boosting work? And how does that affect the relevance?

I'm new to Elasticsearch. I'm having trouble understanding the calibration and scaling of boost values for fields in a document, i.e. how we should decide the boost values for fields so that they work as expected. I've gone through some of the online blogs and the ES docs as well; it's written that ES does normalization and internal optimization of boost values. How does that work?
E.g.: if we have tags, title, name, and text fields in our doc, how should we decide the boost values for these?
Elasticsearch uses a boolean model to match documents, and then a scoring model to determine relevance (i.e. ranking). The scoring model utilizes a TF/IDF score, coupled with some additional features. Those TF/IDF scores are calculated for each matching field within a query, and then aggregated to produce an overall score for a document. To dig into this process, I suggest running explain on your query to see how the score of each field is influencing the overall relevance of your document.
As the expert on your data, you're in the best position to determine which fields should most heavily influence the relevance of your documents. Finding the right boost value for a field is about adjusting the levers until you find a formula that best suits your desired outcome (also, if you have users, A/B testing can help here).
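As a starting point, here is a hedged sketch of running the Explain API against a multi_match query with per-field boosts, so you can see each field's contribution to the score. The index name, document id, and boost values are hypothetical, and the keyword-argument style assumes recent versions of the Python client (older versions take body= instead):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.explain(
    index="articles",   # hypothetical index
    id="42",            # a document you expected to rank differently
    query={
        "multi_match": {
            "query": "distributed search",
            # illustrative boosts: title counts 3x, tags 2x; tune these
            "fields": ["title^3", "tags^2", "name", "text"],
        }
    },
)
# a tree of per-field score components (tf, idf, norms, boost)
print(resp["explanation"])
```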

Simple explanation of different ElasticSearch similarity algorithms

I am looking into the different similarity algorithms which define how the score of each document is computed during search. The available algorithms are listed here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-similarity.html
My problem is that I have trouble understanding them when digging through the Wikipedia articles or the class descriptions in the Lucene API documentation. I really like the answer explaining the TF/IDF similarity algorithm (the default in ElasticSearch) here: What is the reasoning behind the ranking of this ElasticSearch query? (so that one I understand to a certain extent).
Can somebody provide similarly simple explanations for the other algorithms outlined there? These include:
bm25 similarity
dfr similarity
ib similarity
Thank you in advance.
The problem you run into here is that, by the description set forward in the linked answer, Lucene's default similarity and BM25 are fundamentally identical, in that they both factor in:
more occurrences in the document are preferred
terms rarer in the corpus are preferred
shorter documents are more heavily weighted
other functions used to adjust score, boosts, etc.
dfr actually encompasses seven different base models alone, each using a different scoring algorithm, followed by two highly configurable normalization steps. A number of the configuration options fit the very general steps above; some diverge from them.
Similarly, ib allows some significant configuration as well, but generally hits the same high points of favoring higher term frequency, favoring matches on terms that are more rare (by some description), and adjusting for document length, boosts, and other possible normalizations.
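For orientation, here is a sketch of the index settings these similarities map to. The similarity names are mine and the parameter values are illustrative, not recommendations; see the index-modules-similarity page linked above for the full option lists:

```python
# settings body for index creation, e.g. es.indices.create(index=..., body=settings)
settings = {
    "settings": {
        "index": {
            "similarity": {
                "my_bm25": {"type": "BM25", "k1": 1.2, "b": 0.75},
                "my_dfr": {
                    "type": "DFR",
                    "basic_model": "g",       # one of the DFR base models
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                },
                "my_ib": {
                    "type": "IB",
                    "distribution": "ll",
                    "lambda": "df",
                    "normalization": "z",
                    "normalization.z.z": "0.25",
                },
            }
        }
    },
    "mappings": {
        "properties": {
            # each field can pick its own similarity by name
            "body": {"type": "text", "similarity": "my_dfr"}
        }
    },
}
```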
