I understand that the Elasticsearch Scroll API is not intended for real-time user requests. But would it be bad to use it that way? I have a requirement to implement paginated results (to be displayed on a web frontend), and the from/size approach is returning duplicates across pages, presumably because I have a sharded setup (with no replicas at all). I've tried setting `preference` but it did not help.
Scroll API does not seem to have this issue, I'm wondering if it's really bad to use it for my use case?
Thanks
Results from a scrolling search reflect the state of the index at the time of the initial search request. Subsequent indexing or document changes only affect later search and scroll requests. This means your pagination is based on the point in time at which you requested the search, so you won't see newly indexed documents, and you may still see deleted ones, in your results. Also, the Scroll API is no longer recommended by Elasticsearch for deep pagination (as of ES 7.x). You can find more info on the Elasticsearch documentation page: https://www.elastic.co/guide/en/elasticsearch/reference/7.x/scroll-api.html
On the question of why you get duplicate results: I think this is caused by intermediate indexing. When doing independent search calls with pagination, each call runs independently (still using some caching). So if you ask for the first 100, you get the first 100 at that time. When you then ask x seconds later for the 'next' 100, you get results 100-199 as of x seconds later. If meanwhile a new document got indexed that logically belongs in the first 100, it pushes the rest further down. This way, result #100 (the first one in the second call) might have been #99 in the first call. When you then glue them together in the UI, you see the same result twice.
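The effect described above can be reproduced with a tiny simulation (plain Python, no Elasticsearch involved, just a list standing in for a sorted result set): fetch page 1, insert a new item that sorts into the first page, then fetch page 2.

```python
# Toy model of from/size pagination over a result set that changes
# between calls. This is not Elasticsearch itself, just the principle.

def fetch_page(results, frm, size):
    """Mimic a from/size search call against the current index state."""
    return results[frm:frm + size]

results = [f"doc-{i}" for i in range(10)]   # current "index", already sorted

page1 = fetch_page(results, 0, 5)           # user loads page 1

# Meanwhile a new document is indexed that sorts into the first page...
results.insert(3, "doc-new")

page2 = fetch_page(results, 5, 5)           # user loads page 2 seconds later

# "doc-4" was the last hit on page 1 and is now also the first hit on page 2.
duplicates = set(page1) & set(page2)
print(duplicates)                           # {'doc-4'}
```

The same mechanism produces the duplicates across pages reported in the question, even with a perfectly healthy cluster.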
Both scroll and search_after are designed to refer Elasticsearch back to the original call, indicating that you want to continue from that point onwards.
I have not found a good explanation, though, of why search_after is better than scroll.
I assume that scroll is optimized for the use case where you will go through the entire result set anyway (so the pagination exists to avoid overloading the client, and the pipe between ES and the client, with chunks that are too big). search_after, on the other hand, is optimized for the use case where you are likely to go only a few pages deep: human users tend to stay on the first page, with a quickly decreasing frequency of going much further, because nobody wants to force their eyes through overwhelming amounts of information. Implementing good filters in the user interface is the much better approach.
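For reference, this is roughly what search_after pagination looks like at the request-body level (Python dicts standing in for the JSON bodies; the field names `timestamp` and `case_id` and the sort values are made up for illustration):

```python
# First page: an ordinary search with a deterministic sort.
# A unique tiebreaker field is needed so the sort order is total;
# here the hypothetical "case_id" plays that role.
page1_body = {
    "size": 100,
    "query": {"match_all": {}},
    "sort": [{"timestamp": "desc"}, {"case_id": "asc"}],
}

# Suppose the last hit of page 1 had sort values [1609459200000, "case-99"].
# The next page repeats the same query and sort, and adds search_after
# with exactly those values:
page2_body = {
    "size": 100,
    "query": {"match_all": {}},
    "sort": [{"timestamp": "desc"}, {"case_id": "asc"}],
    "search_after": [1609459200000, "case-99"],
}
```

Unlike from/size, each page picks up where the previous one left off in sort order rather than counting offsets, which is why it does not degrade with depth.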
Related
In order to speed up searches on our website, I have created a small Elasticsearch instance which keeps a copy of all of the "searchable" fields from our database. It holds only a couple million documents with an average size of about 1KB per document. Currently (in development) we have just 2 nodes, but will probably want more in production.
Our application is a "primarily read" application - maybe 1000 documents/day get updated, but they get read and searched tens of thousands of times per day.
Each document represents a case in a ticketing system, and the case may change status during the day as users research and close cases. If a researcher closes a case and then immediately refreshes his queue of open work, we expect the case to disappear from their queue, which is driven by a query to our Elastic Search instance, filtering by status. The status is a field in the case index.
The complaint we're getting is that when a researcher closes a case, upon immediate refresh of his queue, the case still comes back when filtering on "in progress" cases. If he refreshes the view a second or two later, it's gone.
In an effort to work around this, I added refresh=true when updating the document, e.g.
curl -XPUT 'https://my-dev-es-instance.com/cases/_doc/11?refresh=true' -H 'Content-Type: application/json' -d '{"status":"closed", ... }'
But still the problem persists.
Here's the response I got from the above request:
{"_index":"cases","_type":"_doc","_id":"11","_version":2,"result":"updated","forced_refresh":true,"_shards":{"total":2,"successful":1,"failed":0},"_seq_no":70757,"_primary_term":1}
The response seems to verify that the forced refresh was received, although it does say that out of 2 total shards, 1 was successful and 0 failed. Not sure about the other one, but since I have only 2 nodes, does this mean it updated the secondary?
According to the doc:
To refresh the shard (not the whole index) immediately after the operation occurs, so that the document appears in search results immediately, the refresh parameter can be set to true. Setting this option to true should ONLY be done after careful thought and verification that it does not lead to poor performance, both from an indexing and a search standpoint. Note, getting a document using the get API is completely realtime and doesn’t require a refresh.
Are my expectations reasonable? Is there a better way to do this?
After more testing, I have concluded that my issue was due to an application logic error, not a problem with Elasticsearch. The refresh flag is behaving as expected. Apologies for the misinformation.
We have finally started using the high-level REST client, to ease the development of queries from a backend engineering perspective. For indexing, we are using client.update(request, RequestOptions.DEFAULT) so that new documents will be created and existing ones modified.
The issue we are seeing is that indexing is delayed, almost by 5 minutes. I see that the client uses async HTTP calls internally, but that should not take so long. I looked for some timing options inside the library and didn't find anything. Am I missing something, or is this missing from the official documentation?
Since you have refresh_interval: -1 in your index settings, the index is never refreshed unless you do it explicitly, which is why you don't see the data just after it's been updated.
You have three options here:
A. You can call the _update endpoint with the refresh=true (or refresh=wait_for) parameter to make sure that the index is refreshed just after your update.
B. You can simply set refresh_interval: 1s (or any other duration that makes sense for you) in your index settings, to make sure the index is automatically refreshed on a regular basis.
C. You can explicitly call the _refresh endpoint on your index whenever you think it is appropriate.
Option B is the one that usually makes sense in most use cases.
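The three options roughly correspond to these calls (sketched here as URLs and bodies rather than live requests; the host, index name, and document ID are placeholders):

```python
# Option A: refresh as part of the update call itself
update_url = "http://localhost:9200/cases/_update/11?refresh=wait_for"

# Option B: set a periodic refresh interval in the index settings
# (sent as: PUT /cases/_settings)
settings_body = {"index": {"refresh_interval": "1s"}}

# Option C: refresh explicitly whenever it suits your application
# (sent as: POST /cases/_refresh)
refresh_url = "http://localhost:9200/cases/_refresh"
```

Option A pays the cost per write, option B amortizes it on a schedule, and option C puts the timing entirely in your hands.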
There are several references to using refresh=wait_for, but I had a hard time finding what exactly needed to be done in the REST high-level client.
For all of you that are searching this answer:
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.support.WriteRequest;

// Block the index call until the change is visible to search
IndexRequest request = new IndexRequest(index, DOC_TYPE, id);
request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
I'm using Elasticsearch 2.3, and I know that the Get API is realtime, i.e. it retrieves the most recent version of a document regardless of refresh_interval. This operation is totally independent of refresh.
While reading the ES 5.x documentation, I found the following:
By default, the get API is realtime, and is not affected by the refresh rate of the index (when data will become visible for search). If a document has been updated but is not yet refreshed, the get API will issue a refresh call in-place to make the document visible. This will also make other documents changed since the last refresh visible. In order to disable realtime GET, one can set the realtime parameter to false.
I tested and confirmed that this isn't the case in an ES 2.3 environment; the Get API does not refresh the index, although it certainly gets the updated document.
Does this mean that the Get API in ES 5.x is actually a very high-cost operation, because refresh is?
The change will only affect you if you update a document and then GET it by ID before it has been refreshed. Is this a common scenario in your use case? Then you might want to disable realtime, but the general assumption is that you should not run into that situation frequently.
This has been discussed on the PR of the change (and explains why the change has been made), so you should find that discussion helpful: https://github.com/elastic/elasticsearch/pull/20102
Overall, the GET API in ES 5.x could be more costly, but it will depend on your actual use case.
Here's my scenario:
I have a page that contains a list of users. I create a new user through my web interface and save it to the server. The server indexes the document in Elasticsearch and returns successfully. I am then redirected to the list page, which doesn't contain the new user, because it can take up to 1 second for documents to become available for search in Elasticsearch.
Near real-time search in elasticsearch.
The elasticsearch guide says you can manually refresh the index, but says not to do it in production.
...don’t do a manual refresh every time you index a document in production; it will hurt your performance. Instead, your application needs to be aware of the near real-time nature of Elasticsearch and make allowances for it.
I'm wondering how other people get around this? I wish there were an event or something I could listen for that would tell me when the document was available for search, but there doesn't appear to be anything like that. Simply waiting 1 second is plausible, but it seems like a bad idea because it could presumably take much less time than that.
Thanks!
Even though you can force ES to refresh itself, you've correctly noticed that it might hurt performance. One way around this, and what people often do (myself included), is to give an illusion of real-time. In the end, it's merely a UX challenge and not really a technical limitation.
When redirecting to the list of users, you could artificially include the new record you've just created, as if it had been returned by ES itself. Nothing prevents you from doing that. By the time the user decides to refresh the page, the new record will be correctly returned by ES, and no one cares where the record came from. All the user cares about at that moment is seeing the record they just created, simply because we're used to thinking sequentially.
Another way to achieve this is by reloading an empty user list skeleton and then via Ajax or some other asynchronous way, retrieve the list of users and display it.
Yet another way is to provide a visual hint/clue on the UI that something is happening in the background and that an update is to be expected very shortly.
In the end, it all boils down to not surprise users but to give them enough clues as to what has happened, what is happening and what they should still expect to happen.
UPDATE:
Just for completeness' sake, this answer predates ES5, which introduced a way to make sure the indexing call does not return until the document is visible to search in the index, or an error code is returned. By using ?refresh=wait_for when indexing your data, you can be certain that when ES responds, the new data is searchable.
Elasticsearch 5 has an option to block an indexing request until the next refresh has occurred:
?refresh=wait_for
See: https://www.elastic.co/guide/en/elasticsearch/reference/5.0/docs-refresh.html#docs-refresh
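To illustrate where the parameter goes, here is a hypothetical URL-building helper (the host, index, and document ID are placeholders). As a quick recap of the values: refresh=true forces an immediate refresh, refresh=wait_for blocks until the next refresh happens, and refresh=false (the default) returns immediately.

```python
# Hypothetical helper, purely for illustration of the query parameter.
def index_url(host, index, doc_id, refresh="wait_for"):
    return f"{host}/{index}/_doc/{doc_id}?refresh={refresh}"

print(index_url("http://localhost:9200", "users", "42"))
# http://localhost:9200/users/_doc/42?refresh=wait_for
```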
Here is a fragment of code which is what I did in my Angular application to cope with this. In the component:
async doNewEntrySave() {
  try {
    // Index the new document; the response arrives before the doc is searchable
    await this.client.createRequest(this.doc).toPromise();
    // Show a "waiting" hint, then refresh once ES has (probably) refreshed
    this.modeRefreshDelay = true;
    setTimeout(() => {
      this.modeRefreshDelay = false;
      this.refreshPage();
    }, 2500);
  } catch (err) {
    this.error.postError(err);
  }
}
In the template:
<div *ngIf="modeRefreshDelay">
<h2>Waiting for update ...</h2>
</div>
I understand this is a quick-and-dirty solution, but it illustrates how the user experience should work. Obviously it breaks if the real-world latency turns out to be more than 2.5 seconds. A fancier version would loop until the new record showed up in the page (with a limit, of course).
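That fancier, looping version could be sketched like this (plain Python for brevity; `search_ids` is a stand-in for whatever call your backend makes to Elasticsearch):

```python
import time

def wait_until_visible(search_ids, new_id, timeout=5.0, interval=0.25):
    """Poll a search callable until new_id shows up in its results, or time out.

    search_ids: zero-argument callable returning the IDs currently visible.
    Returns True if the document became visible within the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if new_id in search_ids():
            return True
        time.sleep(interval)
    return False

# Demo with a fake "index" that becomes consistent after a short delay:
visible_after = time.monotonic() + 0.5
fake_search = lambda: ["11", "12"] if time.monotonic() >= visible_after else ["12"]
print(wait_until_visible(fake_search, "11", timeout=2.0))  # True
```

The timeout is the safety limit mentioned above: if the document never shows up, the UI can fall back to an error or a manual-refresh hint instead of spinning forever.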
Unless you completely redesign Elasticsearch, you will always have some latency between a successful index operation and the time that document shows up in search results.
Data should be available immediately after indexing is complete. A couple of general questions:
Have you checked CPU and RAM to determine whether you are taxing your ES cluster? If so, you may need to beef up your hardware config to account for it. ES loves RAM!
Are you using NAS (network-attached-storage) or virtualized storage like EBS? Elastic recommends not doing so because of the latency. If you can use DAS (direct-attached) and SSD, you'll be in much, much better shape.
To give you an AWS example, moving from m4.xlarge instances to r3.xlarge made HUGE performance improvements for us.
We've got a GWT application with a simple search mask displaying the results as a grid.
Server side processing time is ok as well as network latency.
Client rendering time is ok even on low spec hardware with internet explorer 6 as long as the number of results is not too high (max 100 rows in the grid).
We have implemented a navigation scheme allowing the user to scroll up/down the grid. That's fast enough also.
Does anybody have an idea whether it is possible to display the first 100 results immediately and pull the rest in the background? The GWT architecture allows this. However, I'm interested in possible pitfalls, e.g. what happens if the user starts another query while the browser is still fetching previous results, etc.
Thanks!
Holger
LazyPanel and this blog post might be a good starting point for you :)
The GWT Incubator has also many interesting (albeit not always complete/perfect/stable) tables and other pagination solutions - like PagingScrollTable.
Assuming your plan is to send the first 100 and then bring the rest, you can fetch the remaining results in bulks. Then, if a user initiates another search, you just wait for the end of the current bulk (i.e., check between bulk retrievals whether you have a pending query).
Another way you can go is to assign identifiers to the user's searches. This makes the problem of mixed results non-existent, and will also help you with result history across multiple searches.
We found that users love the live-grid look & feel, which solves most of those problems, but that might not always be an option.