Painless script with Spring Data Elasticsearch

We are using Spring Data Elasticsearch to build a 'fan out on read' user content feed. Our first attempt currently shows content based on keyword matching and recency, using NativeSearchQueryBuilder.
We want to further improve the relevancy order of what is shown to the user based on additional factors (e.g. user engagement, what the user is currently working on, etc.).
Can this custom ordering be done using NativeSearchQueryBuilder, or do we get more control using a Painless script? If it's a Painless script, can we call it from Spring Data Elasticsearch?
Any examples or recommendations would be most welcome.

Elasticsearch orders its results by their relevance score, which marks a result's relevancy to your search query: each document in the result set includes a number that signifies how relevant the document is to the given query.
If the data you want to base your ordering on is part of your indexed data (document fields, for example), you can use the Query DSL to boost the _score field. A few options I can think of:
boost a search query depending on its criteria: a user searches for a 3-room flat, but a 4-room flat at the same price would be an even better match, so we can boost it: { "range": { "rooms": { "gte": 4, "boost": 1 }}}
field_value_factor lets you favor results by a field's value: more 'clicks' by users, more 'likes', etc.
random_score if you want randomness in your results: a different result order every time a user refreshes your page, or mixed with the existing scoring.
decay functions (Gauss!) to boost/unboost results that are close to/far from a central point. Let's say we search for apartments and our budget is set to 1700: { "gauss": { "price": { "origin": "1700", "scale": "300" } } } scores each result by how close it is to our budget of 1,700. A flat with a much higher price (say 2,300) gets heavily penalized by the gauss function, as it is far from our origin. The decay behavior of the gauss function separates our results according to their distance from the origin.
I don't think this has any abstraction in spring-data-es; I would use FunctionScoreQueryBuilder with the NativeSearchQueryBuilder, as in the sketch below.
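A minimal sketch of that combination, assuming Spring Data Elasticsearch 4.x. The field names (engagementCount, createdDate) and the FeedItem entity are hypothetical placeholders for your own mapping:

import org.elasticsearch.common.lucene.search.function.CombineFunction;
import org.elasticsearch.common.lucene.search.function.FieldValueFactorFunction;
import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder;
import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder;
import org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.SearchHits;
import org.springframework.data.elasticsearch.core.query.NativeSearchQuery;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;

public class FeedSearch {

    private final ElasticsearchOperations operations;

    public FeedSearch(ElasticsearchOperations operations) {
        this.operations = operations;
    }

    public SearchHits<FeedItem> search(String terms) {
        // Base relevance: plain keyword matching, as in the first attempt.
        FunctionScoreQueryBuilder query = QueryBuilders.functionScoreQuery(
                QueryBuilders.matchQuery("content", terms),
                new FilterFunctionBuilder[] {
                        // Favor documents users engaged with more (log1p dampens outliers).
                        new FilterFunctionBuilder(
                                ScoreFunctionBuilders.fieldValueFactorFunction("engagementCount")
                                        .modifier(FieldValueFactorFunction.Modifier.LOG1P)
                                        .factor(1.2f)),
                        // Favor recent content: full score at "now", decaying over a week.
                        new FilterFunctionBuilder(
                                ScoreFunctionBuilders.gaussDecayFunction("createdDate", "now", "7d"))
                })
                .scoreMode(FunctionScoreQuery.ScoreMode.SUM)
                .boostMode(CombineFunction.MULTIPLY);

        NativeSearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withQuery(query)
                .build();

        // FeedItem is a hypothetical @Document entity for the feed index.
        return operations.search(searchQuery, FeedItem.class);
    }
}

A Painless script (via script_score) would only be needed for signals that can't be expressed with the built-in score functions; for engagement counts and recency, the function_score builders above are usually enough.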

Related

Elasticsearch query for wikipedia pages

I have indexed all wikipedia pages on elasticsearch, and now I would like to search through them according to a list of keywords that I have created. The documents on elasticsearch have only three fields: id for the page id, title for the page title and content for the page content (already clean of wikipedia markup).
My goal is to reproduce the mediawiki query api as much as possible, with parameters action=query and list=search. For instance, given the keywords "non riemannian metric spaces", a call to
https://en.wikipedia.org/w/api.php?action=query&list=search&format=json&srlimit=10&srprop=&srsearch=non%20riemannian%20metric%20spaces
gives a list of the most relevant pages for those keywords.
So far I have been using rather simple elasticsearch search queries, like for instance
POST _search
{
    "query": {
        "bool": {
            "must": {
                "match": {
                    "content": {
                        "query": "non riemannian metric spaces"
                    }
                }
            },
            "should": {
                "match": {
                    "title": {
                        "query": "non riemannian metric spaces",
                        "boost": x
                    }
                }
            }
        }
    }
}
for several values of boost, like 1, 2 or 0.5. This already gives some decent results, in the sense that the pages I obtain are relevant to the keywords, but sometimes they are not quite the same as the ones I get with the mediawiki api.
I would be glad to hear some suggestions on how to fine-tune the elasticsearch query to mimic the mediawiki api behavior more accurately. Or even better, since the mediawiki api itself is built with elasticsearch and cirrussearch, I would like to know whether the actual elasticsearch query for the entry point above, with those specific parameters, is openly available.
Thank you in advance!
UPDATE (after Robis Koopmans' answer): Seeing the actual query with cirrusDumpQuery has indeed been very useful. I do, however, have some follow-up questions concerning the query:
The query has a set of similar multi_match clauses searching my keywords in fields like ["title.plain^1", "title^3"]. While I understand the ^n boost, I don't know what .plain refers to. Does it have to do with elasticsearch itself (i.e. is it a field derived from title at index time?) or is it something to do with the specific mediawiki mapping they use? In any case, I would appreciate some more information about this.
At some other point in the query, there is a {"match": {"all": {...}}} clause. What exactly is the all key here? Is it a document field? Is it related to the match_all clause?
What is the suggest field that appears in the query? In the score explanation it seems to be associated with synonyms. How are those handled in this case?
Performed after the search, there is a rescore clause with two other score functions. One of them uses the popularity_score of a wikipedia page. What is that?
And finally, the most relevant score that ends up ranking the pages is the output of the sltr clause. In it, there is a "model": "enwiki-20220421-20180215-query_explorer", and in the score explanation it is identified with a LtrModel: naive_additive_decision_tree. I understand that this model is some stored LTR model. However, since it seems to be the most relevant number in the final sorting of the results, what exactly is that model and is it openly available?
Please feel free to answer whichever question you know the answer to, and again thanks a lot!
The query has a set of similar multi_match clauses searching my keywords in fields like ["title.plain^1", "title^3"]. While I understand the ^n boost, I don't know what .plain refers to. Does it have to do with elasticsearch itself (i.e. is it a field derived from title at index time?) or is it something to do with the specific mediawiki mapping they use? In any case, I would appreciate some more information about this.
The .plain fields are generated as part of the elasticsearch mapping. The current settings and mappings are available to see how exactly they work. mediawiki.org includes a search glossary entry on the plain field as well. In general the top level field contains a highly processed form of the text, and the plain field uses minimal analysis.
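For example, to reproduce that pattern in your own query, weighting the heavily processed title field over its minimally analyzed .plain sub-field (field names and boosts taken from the dumped query above):

{
    "query": {
        "multi_match": {
            "query": "non riemannian metric spaces",
            "fields": ["title^3", "title.plain^1"]
        }
    }
}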
At some other point in the query, there is a {"match": {"all": {...}}} clause. What exactly is the all key here? Is it a document field? Is it related to the match_all clause?
mediawiki.org also contains an (incomplete) CirrusSearch schema that gives a brief description of these fields and the various analysis chain components used. The all field is an optimization to give a strong first-pass filter against the search index.
What is the suggest field that appears in the query? In the score explanation it seems to be associated with synonyms. How are those handled in this case?
The suggest field contains shingles (word n-grams) of the article's title and redirects, essentially a pre-calculation of phrase queries. The suggest matches might look like synonyms in the explain output, and they often are, but redirects also include misspellings, translations, and the numerous other reasons editors have for creating redirects. Matches on redirects are generally a strong relevance signal.
Performed after the search, there is a rescore clause with two other score functions. One of them uses the popularity_score of a wikipedia page. What is that?
This is the fraction of page views on the wiki that go to that article.
And finally, the most relevant score that ends up ranking the pages is the output of the sltr clause. In it, there is a "model": "enwiki-20220421-20180215-query_explorer", and in the score explanation it is identified with a LtrModel: naive_additive_decision_tree. I understand that this model is some stored LTR model. However, since it seems to be the most relevant number in the final sorting of the results, what exactly is that model and is it openly available?
This model is generated by mjolnir and essentially overwrites the score from the rest of the query. There is some information on wikitech (found there because it is more specific to the WMF deployment of mediawiki than to mediawiki itself); a slide deck called From Clicks to Models might also give some insight into what's happening in that code base. Perhaps important to know: mjolnir only applies to bag-of-words queries; queries invoking phrases or other expert functionality skip the ML model.
No one had asked for the models before; in case they might be useful, I dumped the current models from the ranking plugin. The dump contains both the feature definitions used and the decision trees generated by xgboost.
I didn't find an excuse to link it above, but the draft page at CirrusSearch/Scoring, which mentions some of the factors that go into retrieval and scoring, particularly for queries that can't be run through mjolnir models, might help as well.
You can add cirrusDumpQuery to your query
example:
https://en.wikipedia.org/w/index.php?title=Special:Search&cirrusDumpQuery=&search=cat+dog+chicken&ns0=1
more information:
https://www.mediawiki.org/wiki/Extension:CirrusSearch#API
You can't make Elasticsearch queries to Wikipedia directly, but CirrusSearch can generate many types of queries beyond fulltext search. It's not clear from your question exactly what type of query you are looking for, but it might be worth looking at the sorting options if you prefer to weight results by text similarity only, and not by things like page views.

elasticsearch scoring on multiple indexes

I have an index for each quarter of the year ("index-2015.1", "index-2015.2", ...).
I have around 30 million documents in each index.
A document has a text field ('title').
My document sorting method is (1) _score, (2) created date.
The problem is:
when searching for some text in the 'title' field across all indexes ("index-201*"), the first results always come from one index.
Let's say I am searching for 'title=home' and I have 10k documents in "index-2015.1" with title=home and 10k documents in "index-2015.2" with title=home. Then the first results are all documents from "index-2015.1" (and not from "index-2015.2", or mixed), even though "index-2015.2" contains documents with a higher "created date" than "index-2015.1".
Is there a reason for this?
The reason is probably that the scores are specific to the index. So if you really have multiple indices, the result scores of the documents will be calculated (slightly) differently for each index.
Simply put, among other things, the score of a matching document depends on the query terms and their occurrences in the index. The score is calculated relative to the index (actually, by default, even relative to each separate shard). For example, if "home" happens to be rarer in "index-2015.1" than in "index-2015.2", its inverse document frequency there is higher, and matching documents from that index score higher across the board. There are some normalizations elasticsearch does, but I don't know the details of those.
I'm not really able to explain it well, but here's the article about scoring. I think you want to read at least the part about TF/IDF, which should explain why you get different scores.
https://www.elastic.co/guide/en/elasticsearch/guide/current/scoring-theory.html
EDIT:
So, after testing it a bit on my machine, it seems possible to use another search_type, to achieve a score suitable for your case.
POST /index1,index2/_search?search_type=dfs_query_then_fetch
{
    "query": {
        "match": {
            "title": "home"
        }
    }
}
The important part is search_type=dfs_query_then_fetch. If you are programming in Java or something similar, there should be a way to specify it in the request (see the sketch below). For details about the search types, refer to the documentation.
Basically, it will first collect the term frequencies from all affected shards (and indexes). The score should therefore be generalized over all of them.
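A minimal Java sketch, assuming the high-level REST client (the exact builder varies by client version):

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// dfs_query_then_fetch: term frequencies are gathered from all shards first,
// so scores are comparable across both indexes.
SearchRequest request = new SearchRequest("index-2015.1", "index-2015.2")
        .searchType(SearchType.DFS_QUERY_THEN_FETCH)
        .source(new SearchSourceBuilder()
                .query(QueryBuilders.matchQuery("title", "home")));
// then: SearchResponse response = client.search(request, RequestOptions.DEFAULT);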
According to Andrei Stefan and Slomo, index boosting solved my problem:
body={
    "indices_boost": {
        "index-2015.4": 1.4,
        "index-2015.3": 1.3,
        "index-2015.2": 1.2,
        "index-2015.1": 1.1
    }
}
EDIT:
Using search_type=dfs_query_then_fetch (as Slomo described) will solve the problem in a better way (depending on what your business model is...).

Elasticsearch: Accessing nested document attributes in script

I store log data in elasticsearch and my records, among other data, contain lists of values. First I represented these lists with regular arrays in elastic, but soon realised that the flattening, in combination with the inverted index in Lucene, made average aggregations on a list such as [1,1,1,1,5] come out completely wrong, since the inverted index only contained [1,5]. Clearly avg([1,5]) is different from avg([1,1,1,1,5]).
Seeking out solutions, I turned to nested documents, which do not flatten the data.
I now have my nested documents in elasticsearch looking something in the line of:
"nested_documents": [
{ "list1": 1, "list2": 2},
{ "list1": 3, "list2": 4}
]
Using the nested aggregation I am able to do aggregations such as:
"aggs": {
"nested_aggregation": {
"nested": {
"path": "nested_documents"
},
"aggs": {
"average_of_list1": {
"avg": {
"field": "nested_documents.list1"
}
}
}
}
This now gives me the correct result over the entire data set. However, I have another requirement as well.
I would like to achieve things like max(avg(nested_documents.list1)), i.e. for each document take the average of a field over its nested documents, and then the maximum of those averages across documents. I imagined I could use a script to achieve this, but I can't find a way to access the nested documents in scripts. I did achieve the desired result using a script and _source, but this was way too slow to be used in production on my datasets.
The only simple (and fast) solution I can imagine is to calculate the averages before storage, and store them along the actual lists, but that doesn't feel right.
Aggregating over aggregation results is not yet supported in elasticsearch. Apparently there is a concept called reducers being developed for 2.0. I would suggest having a look at scripted metric aggregations. Basically, you can create your own aggregation by controlling the collection and computation aspects yourself using scripts.
Have a look at the following question for an example of this aggregation: Elasticsearch: Possible to process aggregation results?
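A rough sketch of what such a scripted metric aggregation could look like for max(avg(list1)), written in current Painless syntax; the index name is illustrative. Note that it reads params._source in the map phase, which is exactly the slow part the question already ran into, so treat this as the shape of the approach rather than a production answer:

POST logs/_search
{
    "size": 0,
    "aggs": {
        "max_of_avg_list1": {
            "scripted_metric": {
                "init_script": "state.max = null",
                "map_script": "def nested = params._source.nested_documents; if (nested != null && !nested.isEmpty()) { double sum = 0; for (d in nested) { sum += d.list1; } double avg = sum / nested.size(); if (state.max == null || avg > state.max) { state.max = avg; } }",
                "combine_script": "return state.max",
                "reduce_script": "Double max = null; for (s in states) { if (s != null && (max == null || s > max)) { max = s; } } return max"
            }
        }
    }
}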

How to structure Elasticsearch indices/types?

How would you structure indices/types for an eshop application? Such an eshop would consist of domain objects like product, category, tag, manufacturer etc. The fulltext search results page should display intermixed list of all domain objects.
I can think of two options:
One index per whole application, every domain object as a type.
Every domain object has its own index, the type is the same - "item".
Which option will scale better?
Most of the "items" in the database are products. Some products aren't available yet/anymore. How can I boost currently available products?
The fulltext search should prefer to show categories/manufacturers at the top of the page. How can I boost certain types / objects from a certain index?
For better performance, I suggest the first option:
1) "One index per whole application, every domain object as a type."
2) Say you create an index named "eshop" and types such as mobile, book, etc.
3) This works because you can query according to your user input. Say you build a shopping website like Flipkart: in search, the user can search with a plain keyword.
4) Now you can search in Elasticsearch mentioning only the index name. If the user applies a filter, like mobile with a price range of 1000-10000, you only need to search inside the mobile type; moreover, filtering is easy in Elasticsearch. This reduces your execution memory and CPU. (See the sketch after this list.)
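A sketch of both cases from point 4, using the mapping types of that Elasticsearch era (since removed); index, type, and field names are illustrative, and the bool/filter form shown is ES 2.x+ syntax:

Plain keyword search across the whole application:
GET /eshop/_search
{
    "query": {
        "match": { "name": "galaxy" }
    }
}

Filtered search restricted to one type:
GET /eshop/mobile/_search
{
    "query": {
        "bool": {
            "must": { "match": { "name": "galaxy" } },
            "filter": { "range": { "price": { "gte": 1000, "lte": 10000 } } }
        }
    }
}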
To boost available products, add a field called "available" to your document, and while searching, specify a boost value for available products. Example:
{
    "query": {
        "term": {
            "available": {
                "value": true,
                "boost": 1.5
            }
        }
    }
}
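In practice you would combine this with the user's actual search so that availability boosts rather than filters; a sketch (the name field is illustrative):

{
    "query": {
        "bool": {
            "must": {
                "match": { "name": "user search terms" }
            },
            "should": {
                "term": {
                    "available": { "value": true, "boost": 1.5 }
                }
            }
        }
    }
}

Available products that match score higher, while unavailable ones still appear further down the results.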
For more on boosting, refer to:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-boosting-query.html
http://jontai.me/blog/2013/01/advanced-scoring-in-elasticsearch/

How to retrieve unique count of a field using Kibana + Elastic Search

Is it possible to query for a distinct/unique count of a field using Kibana? I am using elastic search as my backend to Kibana.
If so, what is the syntax of the query? Here's a link to the Kibana interface where I would like to make my query: http://demo.kibana.org/#/dashboard
I am parsing nginx access logs with logstash and storing the data into elastic search. Then, I use Kibana to run queries and visualize my data in charts. Specifically, I want to know the count of unique IP addresses for a specific time frame using Kibana.
For Kibana 4 go to this answer
This is easy to do with a terms panel:
If you want the count of distinct IPs that are in your logs, you should specify clientip in the field setting, put a big enough number in length (otherwise it will join different IPs under the same group), and select table in style. After adding the panel, you will have a table with each IP and the count of that IP.
Now Kibana 4 allows you to use aggregations. Apart from building a panel like the one explained in this answer for Kibana 3, now we can see the number of unique IPs in different periods, which was (IMO) what the OP wanted in the first place.
To build a dashboard like this you should go to Visualize -> Select your Index -> Select a Vertical Bar chart and then in the visualize panel:
In the Y axis we want the unique count of IPs (select the field where you stored the IP) and in the X axis we want a date histogram with our timefield.
After pressing the Apply button, we should have a graph that shows the unique count of IP distributed on time. We can change the time interval on the X axis to see the unique IPs hourly/daily...
Just take into account that the unique counts are approximate. For more information check also this answer.
Be aware that with Unique Count you are using the 'cardinality' metric, which does not always guarantee an exact unique count. :-)
the cardinality metric is an approximate algorithm. It is based on the HyperLogLog++ (HLL) algorithm. HLL works by hashing your input and using the bits from the hash to make probabilistic estimations on the cardinality.
Depending on the amount of data, I have seen differences of 700+ entries missing from a 300k dataset via Unique Count in Elastic, entries which are otherwise genuinely unique.
Read more here: https://www.elastic.co/guide/en/elasticsearch/guide/current/cardinality.html
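For reference, the query behind that Kibana metric is a cardinality aggregation. A sketch against a logstash-style index (index pattern and field name as used above); precision_threshold can be raised (up to 40000) to trade memory for accuracy:

GET logstash-*/_search
{
    "size": 0,
    "aggs": {
        "unique_ips": {
            "cardinality": {
                "field": "clientip",
                "precision_threshold": 40000
            }
        }
    }
}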
Create "topN" query on "clientip" and then histogram with count on "clientip" and set "topN" query as source. Then you will see count of different ips per time.
Unique counts of field values are achieved by using facets. See the ES documentation for the full story, but the gist is that you create a query and then ask ES to prepare facets on the results for counting values found in fields. It's up to you to customize the fields used and even describe how you want the values returned. The most basic facet type is to group by terms, which would be like an IP address above. You can get pretty complex with these, even requiring a query within your facet!
{
    "query": {
        "match_all": {}
    },
    "facets": {
        "unique_ips": {
            "terms": {
                "field": "ip_address"
            }
        }
    }
}
Using aggs you can easily do that. Here is the query:
GET index/_search
{
    "size": 0,
    "aggs": {
        "source": {
            "terms": {
                "field": "field",
                "size": 100000
            }
        }
    }
}
This will return the different values of the field along with their doc counts.
For Kibana 7.x, Unique Count is available in most visualizations: for example in Lens, in aggregation-based visualizations, and even in TSVB (which supports normal fields as well as runtime fields; scripted fields are not supported).
