RethinkDB: custom scoring (like Elasticsearch)

I recently discovered RethinkDB, and I find its query language much simpler than Elasticsearch's. The only use case I haven't been able to find a solution for is scoring results based on a document's fields, as you can in Elasticsearch (http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/script-score.html). Is there a way to score the query results in RethinkDB and return only the top-n results?

If you have a query like r.table('comments').filter(r.row('name').eq('tldr')), then you can do something like r.table('comments').filter(r.row('name').eq('tldr')).map({score: CALCULATE_SCORE(r.row), row: r.row}).orderBy(r.desc('score')).limit(n) to return the top n results. Note that this does work proportional to the number of results in the original query. If that's too expensive, you can do something similar with an index by writing r.table('comments').indexCreate('score', CALCULATE_SCORE(r.row)) and then querying with r.table('comments').orderBy({index: r.desc('score')}).limit(n).
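As a concrete illustration, here is a minimal sketch with the official JavaScript driver, assuming a local server; the votes and age fields and the scoring formula stand in for CALCULATE_SCORE and are purely hypothetical:

// Minimal sketch: score each matching comment and keep the top 10.
// 'votes', 'age' and the formula below are illustrative placeholders.
var r = require('rethinkdb');

r.connect({host: 'localhost', port: 28015}, function(err, conn) {
  if (err) throw err;
  r.table('comments')
    .filter(r.row('name').eq('tldr'))
    .map(function(doc) {
      // Hypothetical score: votes minus a small age penalty.
      return {score: doc('votes').sub(doc('age').div(100)), row: doc};
    })
    .orderBy(r.desc('score'))
    .limit(10)
    .run(conn, function(err, result) {
      if (err) throw err;
      // orderBy without an index coerces the stream to an array,
      // so 'result' is a plain array of {score, row} objects here.
      console.log(result);
      conn.close();
    });
});

The index-based variant would pass the same scoring function to indexCreate('score', ...) and then read it back with orderBy({index: r.desc('score')}).limit(10), avoiding per-read scoring.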

Related

Elasticsearch multiple score fields

Maybe a silly question: is it possible to have multiple score fields?
I use a custom score based on a function_score query. This score is displayed to the user to show how well each document matches his/her preferences. So far so good.
But! The user should also be able to filter the documents and (of course) sort them not only by the custom relevance (how well each document matches his/her preferences) but also by the common relevance - how well each document matches the filter criteria.
So my first idea was to place the score calculated by the function_score query into a custom field, but that does not seem to be supported.
Or am I completely wrong and should I use another approach?
I took a different approach: if the user applies a filter, I run the query without the function_score percolation and use the score calculated by ES, sorting by it. Then I take all the IDs from the result page and run a percolation query with those IDs to get the custom "matching score". It does not seem to cause a noticeable slowdown.
Anyway, I welcome any feedback.
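For reference, a function_score query of the kind described above might look roughly like this; the match clause, the tags filter, and the weights are illustrative, not taken from the question:

{
  "query": {
    "function_score": {
      "query": { "match": { "title": "user preference terms" } },
      "functions": [
        { "filter": { "terms": { "tags": ["sport", "music"] } }, "weight": 2 },
        { "field_value_factor": { "field": "popularity", "factor": 1.2 } }
      ],
      "score_mode": "sum",
      "boost_mode": "multiply"
    }
  }
}

Each hit carries a single _score, which is why writing the function_score result into a separate custom field is not supported and a two-pass approach like the one above is needed.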

Boosting score in Elasticsearch based on aggregation result

I need to boost the score of my documents based on a particular value, which can be obtained from an aggregation query.
Currently I am using two queries to do it and would like to achieve it in a single query.
1. The first query gets me the chapter with the highest number of occurrences, based on a simple term/match query.
2. Once I have the highest-occurring chapter, I fire a second query: essentially the same query as above, with an additional term query for that chapter boosted by a factor of 10.
Any input on whether this can be accomplished in one query is welcome. Thanks in advance.
Ashit
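A rough sketch of the two-step approach described above: step 1 finds the most frequent chapter with a terms aggregation (the content and chapter field names are illustrative):

{
  "size": 0,
  "query": { "match": { "content": "some search terms" } },
  "aggs": {
    "top_chapter": {
      "terms": { "field": "chapter", "size": 1 }
    }
  }
}

Step 2 reruns the same query with the winning chapter boosted; TOP_CHAPTER_FROM_STEP_1 is a placeholder for the key returned by the aggregation:

{
  "query": {
    "bool": {
      "must": { "match": { "content": "some search terms" } },
      "should": {
        "term": { "chapter": { "value": "TOP_CHAPTER_FROM_STEP_1", "boost": 10 } }
      }
    }
  }
}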

Kibana 4 - Why does my simple query return correct results when using .raw but not without?

I'm trying out Elasticsearch/Kibana 4, and my simple query:
program.raw:"MYAPPLICATION" AND entityId.raw:"12345-67N"
returns the results I want - i.e. posts that have the program and entityId fields and contain exactly the queried terms.
However, I guess the right way to query this would be:
program:"MYAPPLICATION" AND entityId:"12345-67N"
but that gives me correct results only for the program part, plus a lot of hits on terms containing N or n. The entityId part seems to query only on N. I'm confused - please explain this. I've read up on the Lucene query syntax and can't find anything that explains it.
The .raw fields are set up by logstash as "not_analyzed" fields in elasticsearch. As such, they are not split into tokens and can be used intact.
To elasticsearch, entityId really looks like ['12345', '67n'], which is why your query doesn't match.
Note that, in your example, program:myapplication should work (since there are no special characters). Lowercase is automatic, IIRC.
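For context, the multi-field mapping that logstash-style templates set up looks roughly like this simplified sketch (the logs type name is illustrative): the top-level field is analyzed (tokenized and lowercased), while the .raw sub-field is stored not_analyzed.

{
  "mappings": {
    "logs": {
      "properties": {
        "entityId": {
          "type": "string",
          "fields": {
            "raw": { "type": "string", "index": "not_analyzed" }
          }
        }
      }
    }
  }
}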

ElasticSearch / Tire & Keywords. Right way to match "or" for a keyword list?

I've got an Entity model (in Mongoid) that I'm trying to search on its keywords field, which is an array. I want to do a query where I pass in an array of potential search terms, and any entity that matches any of the terms will pass.
I don't have this working well yet.
But the reason I'm asking this question is that it's more complex: I also DON'T want to return any entities that have been marked as "do not return", which I do via an "ignore_project_ids" parameter.
So, when I query, I get 0 results. I was using Bonsai.io, but I've moved this to my own EC2 instance to reduce the complexity/variables while solving the problem.
So, what am I doing wrong? Here are the relevant bits of code.
https://gist.github.com/3405763
You want a terms query rather than a term query - a term query is only interested in equality, whereas a terms query requires that the field match any of the specified values.
Given that you don't seem to care about the query score (you're sorting by another attribute), you'll get faster queries by using a filtered query and expressing your conditions as filters.
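A sketch of what such a filtered query might look like in raw Elasticsearch JSON (pre-2.0 syntax, which matches the Tire era); the keywords field comes from the question, while project_id, created_at, and the literal values are illustrative:

{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "bool": {
          "must": { "terms": { "keywords": ["sports", "music", "art"] } },
          "must_not": { "terms": { "project_id": [1, 2, 3] } }
        }
      }
    }
  },
  "sort": [ { "created_at": { "order": "desc" } } ]
}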

Elastic Search limit results

In MySQL I can do something like:
SELECT id FROM table WHERE field = 'foo' LIMIT 5
If the table has 10,000 rows, then this query is way way faster than if I left out the LIMIT part.
In ElasticSearch, I've got the following:
{
  "query": {
    "fuzzy_like_this_field": {
      "body": {
        "like_text": "REALLY LONG (snip) TEXT HERE",
        "max_query_terms": 1,
        "min_similarity": 0.95,
        "ignore_tf": true
      }
    }
  }
}
When I run this search, it takes a few seconds, whereas MySQL can return results for the equivalent query in far, far less time.
If I pass in the size parameter (set to 1), it successfully returns only 1 result, but the query itself isn't any faster than if I had set the size to unlimited and returned all the results. I suspect the query is being run in its entirety and only 1 result is returned after the query is done processing. This means the "size" attribute is useless for my purposes.
Is there any way to have my search stop searching as soon as it finds a single record that matches the fuzzy search, rather than processing every record in the index before returning a response? Am I misunderstanding something more fundamental about this?
Thanks in advance.
You are correct: the query is being run in its entirety. Queries by default return data sorted by score, so your query is going to score each document. The docs state that the fuzzy query isn't going to scale well, so you might want to consider other queries.
A limit filter might give you behavior similar to what you're looking for:
A limit filter limits the number of documents (per shard) to execute on.
To replicate MySQL's field = 'foo', try using a term filter. You should use filters when you don't care about scoring; they are faster and cacheable.
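A rough sketch of that combination in the pre-2.0 query DSL; the field name and value come from the MySQL example, and note the limit filter applies per shard, so the total can exceed 5 on a multi-shard index:

{
  "size": 5,
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": {
        "and": [
          { "term": { "field": "foo" } },
          { "limit": { "value": 5 } }
        ]
      }
    }
  }
}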
