Elasticsearch - query primary and secondary attribute with different terms

I'm using Elasticsearch to query data that was originally exported out of several relational databases and has a lot of redundancy. I now want to perform queries where I have a primary attribute and one or more secondary attributes that should match. I tried using a bool query with a must term and a should term, but that doesn't seem to work for my case, which may look like this:
Example:
I have a document with the full name and street name of a user, and I want to search for similar users in different indices. So the best match for my query should be the best match on the fullname field and the best match on the street field. But since the original data has a lot of redundancy and inconsistency, the fullname field (which I manually created out of the fields name1, name2 and name3) may contain the same name multiple times, and it seems that Elasticsearch ranks a double match in a must field higher than a match in a should field.
That means, I want to query for John Doe Back Street with the following sample data:
{
    "fullname" : "John Doe John and Jane",
    "street" : "Main Street"
}
{
    "fullname" : "John Doe",
    "street" : "Back Street"
}
Long story short, I want to query for a main attribute fullname - John Doe and a secondary attribute street - Back Street, and I want the second document to be the best match, not the first, which only ranks higher because it contains John multiple times.

Manipulating relevance in Elasticsearch is not the easiest task. The score calculation is based on three main components:
Term frequency
Inverse document frequency
Field-length norm
In short:
the more often the term occurs in the field, the MORE relevant the document is
the more often the term occurs in the entire index, the LESS relevant it is
the longer the field is, the LESS relevant a match in it is
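You can inspect these components for a concrete document with the _explain API; the response breaks the score down into tf, idf and norm parts. A sketch (the index name my_index, type user and document id 1 are placeholders, and the URL layout differs slightly between Elasticsearch versions):

```json
GET /my_index/user/1/_explain
{
    "query": {
        "match": { "fullname": "john doe" }
    }
}
```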
I recommend reading the following materials:
What Is Relevance?
Theory Behind Relevance Scoring
Controlling Relevance and subpages
If, in your case, a match on fullname is generally more important than a match on street, you can boost the importance of the former. Below is an example based on my working code:
{
    "query": {
        "multi_match": {
            "query": "john doe",
            "fields": [
                "fullname^10",
                "street"
            ]
        }
    }
}
In this example a match on fullname is ten times (^10) more important than a match on street. You can try to tune the boost or use other ways to control relevance, but as I mentioned at the beginning, it is not the easiest task and everything depends on your particular situation. That is mostly because of the "inverse document frequency" part, which considers terms from the entire index: each document added to the index will probably change the score of the same search query.
I know that I did not answer directly, but I hope I helped you understand how this works.
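For the original John Doe / Back Street case, another option worth trying is a dis_max query, which takes the score of the best matching subquery instead of summing everything; the repeated John then only inflates the fullname subquery, and the street match can break the tie via tie_breaker. A sketch (field names as in the question; the boost and tie_breaker values are just starting points to experiment with):

```json
{
    "query": {
        "dis_max": {
            "tie_breaker": 0.3,
            "queries": [
                { "match": { "fullname": { "query": "John Doe", "boost": 2 } } },
                { "match": { "street": "Back Street" } }
            ]
        }
    }
}
```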

Related

prevent elasticsearch from matching target phrase multiple times in document

I am an Elastic Search newbie.
How can one make Elasticsearch rank documents higher when they match the input string more precisely?
For example, suppose we have the query
{
    "query": {
        "match": {
            "name": "jones"
        }
    }
}
Suppose we have two documents:
Doc1: "name" : "jones"
Doc2: "name" : "jones jones jones jones jones"
I want Doc1 to be ranked more highly, since it is a more precise match. How can I do this?
(Hopefully, in the most general possible way -- e.g. what if everywhere above 'jones' were replaced with 'fred jones')
Perhaps there are two approaches:
Maybe you can tell ES, "hey, for this query a high term frequency should not be rewarded" (which seems to go against the core of ES and TF-IDF, because it very strongly wants to reward a high TF (term frequency)).
Maybe you can tell ES to "prefer shorter matches over longer ones" (maybe using script_score?)
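The first idea, ignoring term frequency entirely, can at least be sketched with a constant_score query, which gives every matching document the same score, so repeated jones no longer helps (though it also removes all other ranking signals; syntax as in recent Elasticsearch versions):

```json
{
    "query": {
        "constant_score": {
            "filter": {
                "match": { "name": "jones" }
            }
        }
    }
}
```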
Surprised that I can't find answers to this question elsewhere. I must be missing something very fundamental.

How to take (length of the aliases field) out of score calculation

Suppose we have documents of people with their name and an array of aliases, like this:
{
    "name": "Christian",
    "aliases": ["נוצרי", "کریستیان"]
}
Suppose I have a document with 10 aliases and another one with 2 aliases, but both of them contain an alias with the value کریستیان.
The field length (dl) of the first document is bigger than that of the second document, so the normalized term frequency (tf) of the first document is lower than that of the second; eventually the document with fewer aliases gets a higher score than the other.
Sometimes I want to add more aliases for a person, in different languages and different forms, because he/she is more famous, but that causes a lower score in the results. I want to somehow take the length of the aliases field out of my query's score calculation.
Norms store the relative length of the field:
How long is the field? The shorter the field, the higher the weight.
If a term appears in a short field, such as a title field, it is more
likely that the content of that field is about the term than if the
same term appears in a much bigger body field.
Norms can be disabled using the put mapping API:
PUT my_index/_mapping
{
    "properties": {
        "title": {
            "type": "text",
            "norms": false
        }
    }
}
Links for further study
https://www.elastic.co/guide/en/elasticsearch/guide/current/scoring-theory.html#field-norm

tf/idf boosting within field

My use case is like this:
For the query iphone charger, I am getting higher relevance for results with the name iphone charger coupons than for those with the name iphone charger, possibly because of a better match in the description and other fields. Boosting the name field isn't helping much unless I skew the importance drastically. What I really need is a tf/idf boost within the name field.
To quote the Elasticsearch blog:
the frequency of a term in a field is offset by the length of the field. However, the practical scoring function treats all fields in the same way. It will treat all title fields (because they are short) as more important than all body fields (because they are long).
I need to boost this "more important" value for a particular field. Can I do this with function_score or in some other way?
A one-term difference in length is not much of a difference to the scoring algorithm (and, in fact, can vanish entirely due to imprecision in the length norm). If there are hits on other fields, you have a lot of scoring elements to fight against.
A dis_max query would probably be a reasonable approach to this. Instead of all the additive scores and coords you are trying to overcome, it will simply select the score of the best matching subquery. If you boost the subquery against name, you can ensure matches there are strongly preferred.
You can then assign a tie_breaker, so that the score of the description subquery is only factored in as a fraction of its value, on top of the best subquery's score.
{
    "dis_max" : {
        "tie_breaker" : 0.2,
        "queries" : [
            {
                "terms" : {
                    "name" : ["iphone", "charger"],
                    "boost" : 10
                }
            },
            {
                "terms" : {
                    "description" : ["iphone", "charger"]
                }
            }
        ]
    }
}
Another approach to this sort of thing, if you absolutely know when you have an exact match against the entire field, is to separately index an untokenized version of that field and query that field as well. Any match against the untokenized version of the field will be an exact match against the entire field contents. This would prevent you from needing to rely on the length norm to make that determination.
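The untokenized-version idea can be sketched as a multi-field mapping (index and field names are placeholders; in recent Elasticsearch versions the untokenized variant is a keyword sub-field):

```json
PUT my_index
{
    "mappings": {
        "properties": {
            "name": {
                "type": "text",
                "fields": {
                    "raw": { "type": "keyword" }
                }
            }
        }
    }
}
```

A query can then target both name and name.raw, with a strong boost on name.raw so that exact whole-field matches always win.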

Unexpected case sensitivity

I am a noob running Elasticsearch 1.5.9. I want to pull out all of the documents that have the field "PERSON" set to "Johnson" (note the mixed casing). If I manually look in elasticsearch-head, I can see a document with exactly those attributes.
The docs explain that I should construct a filter query to pull out this document. But when I do so, I get some unexpected behavior.
This works: it returns exactly one document with PERSON = "Johnson", as expected.
query = {"filter": {"term" : { "PERSON" : "johnson" }}}
But this does not work
query = {"filter": {"term" : { "PERSON" : "Johnson" }}}
If you look closely, you'll see that the good query is lowercase but the bad query is mixed case, even though the PERSON field is set to "Johnson".
Adding to the weirdness, I am lowercasing everything that goes into the full_text field: "_source": { "full_text": "all lower case" }. So the full text includes johnson, which I would think is totally independent from the PERSON field.
What's going on? How do I do a mixed case search on the PERSON field?
A term query won't analyze your search text.
This means you need to analyze the text yourself and provide the query in token form for a term query to actually work.
Use a match query instead, and things will work like magic.
When a string like the one below goes into Elasticsearch, it is tokenized (or rather, analyzed) and stored:
"Green Apple" -> ("green", "apple")
This is the default behavior of analysis.
Now when you search using a term query, this analysis won't happen.
This means that for the word Apple, it searches for the token Apple with the case preserved, and hence fails.
A match query does perform the analysis, which means that if you search for Apple, it is converted to apple before the search runs. That gives good matches.
You can learn more about analysis here.
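You can watch the analysis step yourself with the _analyze API (body syntax as in recent versions; very old releases used query parameters instead):

```json
GET _analyze
{
    "analyzer": "standard",
    "text": "Green Apple"
}
```

The response lists the tokens green and apple, which is exactly what gets stored in the index and what a term query must match verbatim.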

one-to-many relationships in Elastic Search

Suppose I have two tables called twitter_users and twitter_comments.
twitter_users has the fields: username and bio
twitter_comments has the fields: username and comment
Obviously, a user has one entry in twitter_users and potentially many in twitter_comments.
I want to model both twitter_users and twitter_comments in Elasticsearch, and have ES search both models when I query, knowing that a comment counts towards the overall relevance score of a Twitter user.
I know I can mimic this with just one model, by creating a single extra field (in addition to username and bio) with all the comments concatenated. But is there another, "cleaner" way?
It depends.
If you just want to be able to search a user's comments, full-text and over all fields, simply store all comments within the user object (no need to concatenate anything):
{
    "user" : {
        "username" : "TestUser",
        "bio" : "whatever",
        "comments" : [
            {
                "title" : "First comment",
                "text" : "My 1st comment"
            },
            {
                "title" : "Second comment",
                "text" : "My 2nd comment"
            }
        ]
    }
}
If you need per-comment queries, you need to map the comments as nested (before submitting any data), so that every comment gets treated as a separate item.
For your scoring, simply add another field comment_count and use it for your boost/scoring.
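For the per-comment case, the nested mapping and a matching nested query look roughly like this (index and field names follow the example above; syntax as in recent Elasticsearch versions):

```json
PUT users
{
    "mappings": {
        "properties": {
            "comments": {
                "type": "nested",
                "properties": {
                    "title": { "type": "text" },
                    "text": { "type": "text" }
                }
            }
        }
    }
}

GET users/_search
{
    "query": {
        "nested": {
            "path": "comments",
            "query": {
                "match": { "comments.text": "comment" }
            }
        }
    }
}
```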
As Thorsten already suggested, you can use a nested query, and it's a good approach.
Alternatively, you can index comments as children of users. Then you can search users as you do now, search comments using the top_children query to find all comments relevant to your search, and finally combine the scores from both together using bool or dis_max queries.
The nested approach is more efficient during search, but you will have to reindex the user and all comments every time an additional comment is added. With the child/parent approach you only need to index new comments, but search will be slower and will require more memory.
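In the parent/child approach (for the older Elasticsearch versions that had the top_children query), the relation is declared in the child type's mapping; a sketch with hypothetical index and type names:

```json
PUT twitter
{
    "mappings": {
        "user": {},
        "comment": {
            "_parent": { "type": "user" }
        }
    }
}
```

New comments are then indexed with a parent parameter pointing at the user's id, so the user document itself never needs reindexing.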
