Solr defining custom sort order - sorting

I'm using Solr to offer faceted navigation on my site. However, I need to define a custom sort order for the facets and for the results.
Here is an example of what I need to do.
Current Facets
Activities
Jogging (32)
Skipping (8)
Hiking (6)
Swimming (6)
Skiing (5)
etc
Requirement 1:
What I need to do is define a custom sort order in the query, so that when the user lands on the search results page it shows all the results, but results with the Jogging tag always load first, then Skipping, and so on.
Requirement 2:
If I am able to achieve requirement 1, then there is more to it: from the 2nd facet onward, I only need to load 5 results, and the rest should be appended towards the tail of the result set. For example, Skipping will load just the top 5 Skipping results, and the remaining 3 will be added towards the tail of the entire result set.
Requirement 2 is good to have, and I can push back on it if it's not achievable purely through a Solr query, but requirement 1 is a must.
I am pretty new to Solr right now and need to know the best way to handle it. Any help or direction will be highly appreciated.
Thanks
- RT
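A minimal sketch of one way requirement 1 might be approached: boost queries with steeply descending weights, sorted by score. The edismax parser, the activity field name, the core URL, and the weights below are all assumptions, not from the question:

    import requests

    SOLR = "http://localhost:8983/solr/mycore"  # hypothetical core

    # Boost queries add to the relevance score; with weights spaced far
    # apart, Jogging documents sort ahead of Skipping, and so on.
    params = {
        "q": "*:*",
        "defType": "edismax",
        "bq": [
            "activity:Jogging^400",
            "activity:Skipping^300",
            "activity:Hiking^200",
            "activity:Swimming^100",
        ],
        "sort": "score desc",
        "wt": "json",
    }
    docs = requests.get(f"{SOLR}/select", params=params).json()["response"]["docs"]

Note that boosts only bias the score rather than guarantee a strict ordering; if the order must be absolute, Solr's Query Elevation Component is worth a look.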

Related

Elastic search pagination starting with a particular document

I am using Elasticsearch to show a paginated list of products in a grid view in a mobile app. The user can scroll through the list and click on any product to view its details.
The detail view also supports scrolling through the products via swiping left and right, so for the detail view I want to fetch paginated results from Elasticsearch starting from a particular product.
For now I am calculating the index of the product in list view and then doing the math to fetch that particular page and scroll to the index.
Is there a better way to do this?
You can use the scroll API to get paginated results for your search query.
Alternatively, you can use the search_after API, which seems to perform better for large numbers of results, but it is available only for 7.x+ Elastic versions.
I'm not totally certain — I haven't tested it myself — but I think the suggestion of using the search_after API is the way to go.
You need to use something like the point in time feature, which is what search_after uses. Without it, you have no guarantee that the data in the database aren't changing. If the data change, then your search results may change. If those change, then what comes "next" also changes, and that may no longer correspond to what you want.
E.g., if you currently have 10 search results and your item of interest is at index 5, if someone adds a document that moves your point of interest to index 6, then naïvely asking for the next item would return the same thing!
The point in time feature creates a snapshot of the database at a moment in time, so you don't have to worry about new or modified documents messing things up.
As an aside, using the point in time feature at scale is probably (again, making educated guesses here) not a very good idea. Elasticsearch has to keep a mini-snapshot of the whole database (!) for the duration, every time you call it.
You're probably better off limiting the number of items people can page through to something large but manageable, and then reloading a new page if someone gets to the end. If you pull 500 products initially and someone gets to the end (which seems unlikely to me a priori), you could re-issue the search paged forward 500 items, deduplicate at the boundary and no one will be the wiser.
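For illustration, a minimal sketch of the point-in-time plus search_after flow described above, in Python with requests; the cluster URL, the products index, and the name.keyword and id fields are all assumptions:

    import requests

    ES = "http://localhost:9200"  # assumed local cluster
    INDEX = "products"            # hypothetical index

    # Open a point in time so every page sees the same snapshot.
    pit_id = requests.post(f"{ES}/{INDEX}/_pit",
                           params={"keep_alive": "1m"}).json()["id"]

    body = {
        "size": 10,
        "query": {"match_all": {}},
        "pit": {"id": pit_id, "keep_alive": "1m"},
        # A unique tiebreaker field keeps the sort total, so
        # search_after never stalls on equal primary sort values.
        "sort": [{"name.keyword": "asc"}, {"id": "asc"}],
    }

    while True:
        hits = requests.post(f"{ES}/_search", json=body).json()["hits"]["hits"]
        if not hits:
            break
        for hit in hits:
            ...  # render hit["_source"]
        # Resume from the sort values of the last hit on this page.
        body["search_after"] = hits[-1]["sort"]

    # Release the snapshot when done.
    requests.delete(f"{ES}/_pit", json={"id": pit_id})

Note the search itself is addressed to /_search without an index name: the point in time already pins down which indices (and which state) are being searched.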

Elastic Search - Scroll behavior

I have come across at least two possible ways to fetch the results in batches:
Scroll API
Pagination - from, size parameters
What is the fundamental difference? I am assuming #1 allows you to scroll over the records while #2 allows you to fetch a batch of records at a time. If I just use different from/size parameters to drive pagination, is there a chance that the same record will be returned in different batches?
Using from/size is the default and easiest way to paginate results. By default, it only works up to a size of 10000. You can increase that limit, but it is not advised to go too far because deep pagination will decrease the performance of your cluster.
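For reference, a minimal from/size request; the index name is a placeholder, and the 10000 ceiling mentioned above corresponds to the index.max_result_window index setting:

    import requests

    ES = "http://localhost:9200"  # assumed local cluster

    # Page 3 at 10 hits per page: from = (page - 1) * size.
    body = {"from": 20, "size": 10, "query": {"match_all": {}}}
    hits = requests.post(f"{ES}/my-index/_search", json=body).json()["hits"]["hits"]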
The scroll API will allow you to paginate over all your data. The way it works is by creating a search context (i.e. a snapshot of the data at the time you start scrolling); you then get a cursor to paginate over all your data, and when done you can close the search context. The created search context has an associated cost (it requires keeping state, hence memory), so this way of paginating is not suited to real-time pagination (it is more for batch-like pagination).
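A sketch of that scroll loop (index name, page size, and keep-alive are placeholders):

    import requests

    ES = "http://localhost:9200"  # assumed local cluster

    # The initial search opens a scroll context kept alive for 1 minute.
    resp = requests.post(f"{ES}/my-index/_search",
                         params={"scroll": "1m"},
                         json={"size": 100, "query": {"match_all": {}}}).json()

    while resp["hits"]["hits"]:
        for hit in resp["hits"]["hits"]:
            ...  # process hit["_source"]
        # Each call returns the next batch and refreshes the keep-alive.
        resp = requests.post(f"{ES}/_search/scroll",
                             json={"scroll": "1m",
                                   "scroll_id": resp["_scroll_id"]}).json()

    # Close the context explicitly instead of waiting for the timeout.
    requests.delete(f"{ES}/_search/scroll",
                    json={"scroll_id": resp["_scroll_id"]})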
There is another way of scrolling over all the data without the additional cost of creating a dedicated search context every time, and it's called search_after. In this flavor, the idea is to sort your data and then use the sort values as lightweight cursors. It has some drawbacks: for instance, if you're constantly indexing new data, you run the risk of missing documents that would have appeared on a previous "page".
In 7.10, there is going to be yet another way of paginating data, which is called Point in Time search (PIT). Here the idea is again to create a context so that you can return hits as rapidly as possible and aggregations (a bit later) in two distinct calls.
UPDATE
7.10 got released on Nov 11th, 2020, and Point in Time searches are now available, too.

How to handle pagination when the source data changes frequently

Specifically, I'm using Elasticsearch to do pagination, but this question could apply to any database.
Elasticsearch provides methods to paginate search results with handy from and size parameters.
So I run a query: get me the most recent data from result 1 to 10.
This works great.
The user clicks "next page" and the query is:
get me the most recent data from result 11 to 20
The problem is that in the time between the two queries, 2 new records have been added to the backing database, which means the paginated results will overlap (the last 2 from the first page show up as first two on the second page).
What's the best solution to avoid this? Right now, I'm adding a filter to the query that tells it to only include results older than the last result of the previous query. But it just seems hackish.
A filter is not a bad option, if you're already indexing a relevant timestamp. You have to track that timestamp on the client side in order to correctly prepare your queries. You also have to know when to get rid of it. But those aren't insurmountable problems.
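A sketch of that filter approach, assuming newest-first ordering on a date field named created_at (field and index names are made up):

    import requests

    ES = "http://localhost:9200"  # assumed local cluster

    # Page 1: the ten most recent documents.
    body = {"size": 10, "sort": [{"created_at": "desc"}]}
    hits = requests.post(f"{ES}/events/_search", json=body).json()["hits"]["hits"]
    last_ts = hits[-1]["sort"][0]  # sort value of the last hit shown

    # Page 2: only documents strictly older than the last one shown, so
    # freshly inserted records cannot push old hits onto this page.
    # (Ties on created_at would also need a tiebreaker field.)
    body["query"] = {"range": {"created_at": {"lt": last_ts}}}
    hits = requests.post(f"{ES}/events/_search", json=body).json()["hits"]["hits"]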
The Scroll API is a solid option for this, because it effectively takes a snapshot in time on the Elasticsearch side. The intent of the Scroll API is to provide a stable search query for deep pagination, which has to deal with exactly the issue of change that you're experiencing.
You begin a Scrolling Search by supplying your query and the scroll parameter, for which Elasticsearch returns a scroll_id. You then make requests to /_search/scroll supplying that ID, each of which return a page of results and a new scroll_id for the next request.
(Note that you don't want the scan search type here. That's used to extract documents en masse, and does not apply any sorting.)
Compared to filtering, you do still have to track a value: the scroll_id for your next page of results. Whether that's easier than tracking a timestamp depends on your app.
There are other potential downsides to consider. Elasticsearch persists the context for your search on a single node within the cluster. Conceivably these could accumulate in your cluster, depending on how heavily you rely on scrolling search. You'll want to test the performance implications there. And if I recall correctly, scrolling searches also do not persist through a node failure or restart.
The ES documentation for the Scroll API provides good details on all of the above.
Bottom line: filtering by timestamp is actually not a bad choice. The Scroll API is another valid option, designed for a similar use case, but is not without its drawbacks.
I realise this is a bit old, but with Elasticsearch 6.3 there's now the search_after feature for the request body, which allows for cursor-type paging:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-search-after.html
It is very similar to the scroll API, but unlike it, the search_after parameter is stateless: it is always resolved against the latest version of the searcher.
You need to use the scan API for this. The scan and scroll APIs let you do point-in-time search and pagination.

ElasticSearch documentation says not to use scroll for user requests, only for data transformation

I'm new to ES and confused by its documentation of scroll. From the docs: "Scrolling is not intended for real time user requests, but rather for processing large amounts of data, e.g. in order to reindex the contents of one index into a new index with a different configuration".
And yet...further down on the same page it says not to use from() and size() to do pagination because it "is very inefficient". And on the Java API page describing Search it shows an example of paging via Scroll.
So, assuming I want to present sorted search results, a page at a time, which approach is recommended: from/size or Scrolling?
from/size is very inefficient when you want to do deep pagination or if you want to request lots of results per page.
The reason is that results are sorted first on each shard, and all those results are then gathered, merged and sorted by the coordinating node for the request. This becomes more and more costly as the pages grow either in size or in rank. You will find a very good example documented here.
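To make that concrete: with 5 shards, a request for from=10000&size=25 forces each shard to return its top 10,025 candidates, so the coordinating node receives and merges 5 × 10,025 = 50,125 documents just to produce the 25 hits on that page.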
You could limit the size of your users' queries (e.g. to something like ~1000 results), and you will be fine using from/size.
If that's not an option, you can still use scroll, but you will lose some features like aggregations, and keeping the search context alive has a cost.
You can use search_after. The basic process flow will be like this:
Perform your regular search to return an array of sorted document results by date.
Perform the next query with the search_after field in the body to tell Elasticsearch to only return documents after the specified document (date).
This way, your results remain robust against updates or document deletions and stay accurate. You also avoid the cost of scrolling (as you've likely already read) and the from/size method's per-query cost, which grows linearly with how far you page from your initial result.
See the docs for more info and implementation details.
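A minimal sketch of that flow, with a date field named created_at and a unique id field as tiebreaker (both hypothetical):

    import requests

    ES = "http://localhost:9200"  # assumed local cluster

    # First query: a regular date-sorted search, with a unique tiebreaker.
    body = {
        "size": 10,
        "sort": [{"created_at": "desc"}, {"id": "asc"}],
    }
    page = requests.post(f"{ES}/articles/_search", json=body).json()["hits"]["hits"]

    # Next query: pass the last hit's sort values back via search_after,
    # so Elasticsearch returns only documents after that point.
    body["search_after"] = page[-1]["sort"]
    next_page = requests.post(f"{ES}/articles/_search", json=body).json()["hits"]["hits"]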
Both scroll and from/size suffer from deep pagination. You could try a hybrid approach: paginate in larger steps (e.g. 100 entries at a time) but have the UI show smaller batches (e.g. only 10). As the user keeps turning pages, at some point you trigger another background search for the next batch while the user is occupied. If you track these sessions and get a rough idea of how deep users search, you can find your ideal result-set size and scroll in steps of that size.
Between the two, I had better experience with scrolling than from/size in terms of response times, but YMMV. Comes down to your data, shard setup, etc.
There's a decent article about pagination here. The cliff notes seem to be:
if you're presenting results to a user for application search: use from/size (this technique is preferable up to 10,000 results)
if you're infinite scrolling, use search_after - this is more efficient and can be used with > 10,000 results.
if you have regular inserts on your index, search_after is all the more preferable, because it should avoid duplicates arising from an insert on page 1 pushing results onto page 2.
if you need users to be able to "go back" (from page 2 to page 1) for example, and see consistent results, you need a technique which freezes the results. This could be either:
Point in Time API: ES > 7.10 X-Pack feature
Scroll API: Older, free-er versions of ES
The article merits a read if you've got this far. Bonus link to the es pagination docs.

SolrCloud: workaround for classic pagination with "start,rows" parameters

I have SolrCloud with 3 shards.
My goal: select and process all products from a category.
Current implementation: selecting portions in a loop.
1st iteration: q=cat:1&start=0&rows=100
2nd iteration: q=cat:1&start=100&rows=100
3rd iteration: q=cat:1&start=200&rows=100
...
But as "start" grows, performance degrades. The explanation is here: https://wiki.apache.org/solr/DistributedSearch
Makes it more inefficient to use a high "start" parameter. For example, if you request start=500000&rows=25 on an index with 500,000+ docs per shard, this will currently result in 500,000 records getting sent over the network from the shard to the coordinating Solr instance. If you had a single-shard index, in contrast, only 25 records would ever get sent over the network. (Granted, setting start this high is not something many people need to do.)
Any ideas for how I can walk through all records in a category?
There is another way to do more efficient pagination in Solr - cursors - which use the current place in the sort instead. This is particularly useful for deep pagination.
See the section about cursors on the Pagination of Results wiki page. This should speed up delivery, as the server should be able to sort its local documents, decide where it is in that sequence, and return the 25 documents after that document.
UPDATE: Also a useful link: coming-soon-to-solr-efficient-cursor-based-iteration-of-large-result-sets
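A sketch of cursor-based paging for this case; the core name is assumed, and note that cursorMark requires the sort to include the uniqueKey field (id here):

    import requests

    SOLR = "http://localhost:8983/solr/products"  # hypothetical core

    params = {
        "q": "cat:1",
        "rows": 100,
        "sort": "id asc",   # must include the uniqueKey for cursors
        "cursorMark": "*",  # "*" starts a fresh cursor
        "wt": "json",
    }

    while True:
        resp = requests.get(f"{SOLR}/select", params=params).json()
        for doc in resp["response"]["docs"]:
            ...  # process doc
        next_mark = resp["nextCursorMark"]
        if next_mark == params["cursorMark"]:
            break  # the cursor is exhausted when the mark repeats
        params["cursorMark"] = next_mark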
I think the short answer is "no" - it's a limitation of how Solr does sharding. Can you instead amass a list of document unique keys outside of Solr - presumably from a backing database - and then retrieve from the index using sets of those keys?
e.g. ID:(1 OR 2 OR 3 OR ...very long list...)
Or, if the unique keys are numeric you could use a moving range instead:
ID:[1 TO 1000] then ID:[1001 TO 2000] and so forth.
In both options above you'd also restrict by category. Both should avoid the slowdown associated with windowing, however.
