Elasticsearch Scroll API effect on CPU

What is the effect of the scroll API on the CPU utilization of a node? I am experiencing high CPU utilization with the scroll API on ES version 6.2.
Even though the query is run only once to fetch all the data, and the data is then fetched using the scroll_id, we experience CPU spikes.
Also, where is the cached result stored? In memory or on disk?

You should clear your scroll "pointer" after usage. From the documentation:
Search context are automatically removed when the scroll timeout has
been exceeded. However keeping scrolls open has a cost, as discussed
in the previous section so scrolls should be explicitly cleared as
soon as the scroll is not being used anymore using the clear-scroll
API:
As described here:
Normally, the background merge process optimizes the index by merging
together smaller segments to create new bigger segments, at which time
the smaller segments are deleted. This process continues during
scrolling, but an open search context prevents the old segments from
being deleted while they are still in use. This is how Elasticsearch
is able to return the results of the initial search request,
regardless of subsequent changes to documents.
So if I understand well, there is no cache. It's just that the segments targeted by your query are frozen until your scroll expires. As segments are immutable in Lucene, this ensures that you get consistent results and that you can scroll through all the data that existed when you created the scroll. The drawback is that as long as your scroll "pointer" exists, the targeted segments are kept open and not deleted.
So the number of open segments will keep increasing, and the number of file handles needed will increase with it. On a wide query, and particularly if you are indexing at the same time, this can lead to performance issues.
When you index, you create a lot of small segments that should be merged afterward, but if a scroll query is holding them, they can't be fully merged and deleted.
Are you indexing continuously, and how long is your scroll duration?
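For reference, the clear-scroll pattern looks roughly like this with the Python elasticsearch client (a minimal sketch assuming 7.x-style keyword arguments; the index name "my-index" is made up):

```python
# Minimal sketch with the Python elasticsearch client (7.x-style keyword
# arguments assumed); the index name "my-index" is made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# The initial search opens the scroll and returns the first page.
resp = es.search(
    index="my-index",
    scroll="2m",  # keep the search context alive 2 minutes per round trip
    body={"size": 1000, "query": {"match_all": {}}},
)
scroll_id = resp["_scroll_id"]

try:
    while resp["hits"]["hits"]:
        for hit in resp["hits"]["hits"]:
            print(hit["_id"])  # stand-in for real processing
        # Each scroll call renews the timeout and returns the next page.
        resp = es.scroll(scroll_id=scroll_id, scroll="2m")
        scroll_id = resp["_scroll_id"]
finally:
    # Release the search context explicitly instead of waiting for the
    # timeout, so the segments it pins can be merged away sooner.
    es.clear_scroll(scroll_id=scroll_id)
```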

Related

Why is Elasticsearch PIT/search_after a lightweight version of the scroll API?

Elasticsearch provides different ways of paginating through large amounts of data. The scroll API and search_after/PIT both allow a view of the data at a given point in time.
Normally, the background merge process optimizes the index by merging
together smaller segments to create new bigger segments, at which time
the smaller segments are deleted. This process continues during
scrolling, but an open search context prevents the old segments from
being deleted while they are still in use. This is how Elasticsearch
is able to return the results of the initial search request,
regardless of subsequent changes to documents.
It seems that this is achieved in the same way for both: the old segments are prevented from being deleted. However, search_after/PIT is often referred to as the lightweight alternative. Is this purely because a PIT can be shared between queries, and therefore not as many PITs have to be created?
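For context, the search_after/PIT pattern I am comparing against looks roughly like this (a sketch assuming Elasticsearch 7.12+ and a matching Python client; the index name is made up). Note that a single PIT backs every page and could be shared across queries as well:

```python
# A sketch of search_after with a shared point in time (PIT), assuming
# Elasticsearch 7.12+ and a matching Python client; "my-index" is made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# A single PIT holds one consistent view; it can back many pages (and
# several different queries as well).
pit_id = es.open_point_in_time(index="my-index", keep_alive="1m")["id"]

search_after = None
while True:
    body = {
        "size": 1000,
        "query": {"match_all": {}},
        "pit": {"id": pit_id, "keep_alive": "1m"},
        "sort": [{"_shard_doc": "asc"}],  # tiebreaker required for paging
    }
    if search_after is not None:
        body["search_after"] = search_after
    resp = es.search(body=body)  # no index in the request; the PIT implies it
    hits = resp["hits"]["hits"]
    if not hits:
        break
    search_after = hits[-1]["sort"]  # resume token for the next page

es.close_point_in_time(body={"id": pit_id})
```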

Elasticsearch: When serving a read request, why not try to find the document in the memtable first to achieve real-time query?

The official Elasticsearch document Near real-time search says:
In Elasticsearch, this process of writing and opening a new segment is called a refresh. A refresh makes all operations performed on an index since the last refresh available for search.
By default, Elasticsearch periodically refreshes indices every second, ... This is why we say that Elasticsearch has near real-time search: document changes are not visible to search immediately, but will become visible within this timeframe.
I am a little confused: when serving a read request, why not try to find the document in the memtable first, and then in the on-disk segments? If we did that, we would not need to wait for the refresh, which would make real-time queries possible.
Really good question, but to understand why Elasticsearch doesn't serve a search request from in-memory documents, we have to go a little deeper and understand why segments are created in the first place and why they are immutable.
As you might be aware, segments are the actual physical files that store the data of the search index. Segments are immutable, and this immutability provides a lot of benefits, such as:
Segments can be cached.
Segments can be used in multi-threaded environments without worrying about the state being changed.
Now, as segments are cached and can be used in multi-threaded environments, it's much easier to use the file system cache to provide faster search. Of course, that means you sometimes won't have the newest copy of the data, but that's a trade-off compared to iterating through a memtable that is still being modified and can still show an old version of a document (so you would still have near-real-time data at best). A memtable can't be cached, as it's not immutable, so every search thread would end up searching a dataset that is always in motion, and if you applied locking on the memtable while searching, it would reduce indexing speed.
By the way, this design comes from Lucene; Elasticsearch uses Lucene as a library, so it's not really Elasticsearch that controls this.
Bottom line: even if you searched the memtable without locking and without blocking updates while searching, you couldn't show real-time data, and it would considerably slow both indexing and search.
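As an aside, if you need read-your-own-writes semantics, the supported approach is to pay for a refresh on the write path rather than to search the in-memory buffer. A minimal sketch with the Python client (7.x keyword style assumed; the index name and document are made up):

```python
# Minimal sketch with the Python elasticsearch client (7.x keyword style
# assumed); the index name and document are made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# refresh="wait_for" blocks until the next refresh makes the document
# searchable; refresh=True would force an immediate (more expensive) refresh.
es.index(index="my-index", id="1", body={"title": "hello"}, refresh="wait_for")

# The document is now guaranteed to be visible to this search.
resp = es.search(index="my-index", body={"query": {"match": {"title": "hello"}}})
print(resp["hits"]["total"])
```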
Hope this helps.

Understanding the overhead of Elasticsearch search context being kept open from a scroll query or point in time API

The Elasticsearch documentation mentions that a scroll query or point in time operation can put greater pressure on a shard's disk, memory, or OS via open file handles, as older segments cannot be merged.
Is the amount of data retained due to an open search context (data that would otherwise be deleted) proportional to the size of the segment the updated data happened to be on, rather than to the amount of data that is updated as perceived by the client?
For example, if a client updated a 5KB document and, internally, the data for this document was on a 10MB segment that ends up getting merged, the entire 10MB segment would be retained when it otherwise would have been deleted. So in essence, the memory/disk impact of this context staying open is 10MB rather than 5KB. Is this correct?
If this is the case, is there any bound on how large a retained segment can be? Would a faster rate of indexing, or larger documents being indexed, result in more memory consumption? Is there any way to do a back-of-the-envelope calculation based on an application's access patterns to determine what kind of worst case you might expect? Or will there always be some probability that an unlucky update triggers a merge operation and a large segment is retained, potentially causing resource exhaustion?
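For what it's worth, one way I imagine grounding such a back-of-the-envelope estimate is to watch actual segment sizes while a context is open, e.g. via the _cat/segments API (a sketch using the Python client; the index name is made up):

```python
# A sketch using the _cat/segments API via the Python client; the index
# name is made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# _cat/segments lists every live segment with its size and doc counts, so
# you can see how much data an open search context is actually pinning.
for seg in es.cat.segments(index="my-index", format="json"):
    print(seg["segment"], seg["size"], seg["docs.count"], seg["docs.deleted"])
```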

Why segment merge in Elasticsearch requires stopping the writes to index

I am looking to run optimize (ES 1.x), which is now known as the force merge API in the latest ES versions. After reading some articles like this and this, it seems we should run it only on read-only indices. Quoting the official ES docs:
Force merge should only be called against read-only indices. Running
force merge against a read-write index can cause very large segments
to be produced (>5GB per segment)
But I don't understand the reason behind putting the index in read-only mode before running the force merge or optimize API. As explained in the ES doc above, it could produce very large segments, which shouldn't be the case as I understand it: new updates are first written in memory and only written to segments when a refresh happens, so why can writing during a force merge produce very large segments?
Also, is there any workaround if we don't want to put the index in read-only mode and still want to run force merge to expunge deletes?
Let me know if I need to provide any additional information.
Force merge can significantly improve the performance of your queries, as it allows you to merge the existing segments into a smaller number of segments, which is more efficient for querying since segments get searched sequentially. While merging, all documents marked for deletion also get cleaned up.
Merging happens regularly and automatically in the background as part of Elasticsearch's housekeeping, based on a merge policy.
The tricky thing: only segments up to 5GB are considered by the merge policy. Using the force merge API with the parameter that allows you to specify the number of resulting segments, you risk that the resulting segment(s) get bigger than 5GB, meaning that they will no longer be considered by future merge requests. As long as you don't delete or update documents, there is nothing wrong with that. However, if you keep on deleting or updating documents, Lucene will mark the old version of your documents in the existing segments as deleted and write the new version of your documents into new segments. If your deleted documents reside in segments larger than 5GB, no more housekeeping is done on them, i.e. the documents marked for deletion will never get cleaned up.
By setting an index to read-only prior to doing a force merge, you ensure that you will not end up with huge segments containing a lot of legacy documents, which consume precious resources in memory and on disk and slow down your queries.
A refresh does something different: it's correct that documents you want to get indexed are first processed in memory before getting written to disk. But the data structure that allows you to actually find a document (the "segment") does not get created for every single document right away, as this would be highly inefficient. Segments are only created when the internal buffer gets full, or when a refresh occurs. By triggering a refresh you make a document immediately available for finding. Still, the segment at first only lives in memory, as, again, it would be extremely inefficient to immediately sync every segment to disk right after it got created. Segments in memory get periodically synced to disk. Even if you pull the plug before a sync to disk happened, you don't lose any information, as Elasticsearch maintains a translog that allows it to "replay" every indexing request that has not yet made it into a segment on disk.
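Putting the pieces together, the recommended sequence could be sketched like this with the Python client (7.x keyword style assumed; the index name is made up):

```python
# A sketch of the read-only + force merge sequence with the Python
# elasticsearch client (7.x keyword style assumed); "my-index" is made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. Block writes so no new segments or deletes appear mid-merge.
es.indices.put_settings(index="my-index", body={"index.blocks.write": True})

# 2. Merge down to a single segment; safe only because writes are blocked.
es.indices.forcemerge(index="my-index", max_num_segments=1)

# 3. Lift the block if the index must accept writes again, keeping in mind
#    that further deletes in a segment larger than 5GB won't be cleaned up.
es.indices.put_settings(index="my-index", body={"index.blocks.write": None})

# For the expunge-deletes use case from the question, there is also:
# es.indices.forcemerge(index="my-index", only_expunge_deletes=True)
```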

efficiently getting all documents in an elasticsearch index

I want to get all results from a match-all query in an Elasticsearch cluster. I don't care if the results are up to date and I don't care about the order; I just want to steadily keep going through all results and then start again at the beginning. Are scroll and scan best for this? It seems like a bit of a hit taking a snapshot that I don't need. I'll be looking at processing tens of millions of documents.
This is somewhat of a duplicate of elasticsearch query to return all records, but we can add a bit more detail to address the overhead concern (viz., "it seems like a bit of a hit taking a snapshot that I don't need").
A scroll-scan search is definitely what you want in this case. The "snapshot" is not a lot of overhead here. The documentation describes it metaphorically as "like a snapshot in time" (emphasis added). The actual implementation details are a bit more subtle, and quite clever.
A slightly more detailed explanation comes later in the documentation:
Normally, the background merge process optimizes the index by merging together smaller segments to create new bigger segments, at which time the smaller segments are deleted. This process continues during scrolling, but an open search context prevents the old segments from being deleted while they are still in use. This is how Elasticsearch is able to return the results of the initial search request, regardless of subsequent changes to documents.
So the reason the context is cheap to preserve is because of how Lucene index segments behave. A Lucene index is partitioned into multiple segments, each of which is like a stand-alone mini index. As documents are added (and updated), Lucene simply appends a new segment to the index. Segments are write-once: after they are created, they are never again updated.
Over time, as segments accumulate, Lucene will periodically do some housekeeping in the background. It scans through the segments and merges segments to flush the deleted and outdated information, eventually consolidating into a smaller set of fresher and more up-to-date segments. As newer merged segments replace older segments, Lucene will then go and remove any segments that are no longer actively used by the index at large.
This segmented index design is one reason why Lucene is much more performant and resilient than a simple B-tree. Continuously appending segments is cheaper in the long run than the accumulated IO of updating files directly on disk. Plus the write-once design has other useful properties.
The snapshot-like behavior used here by Elasticsearch is to maintain a reference to all of the segments active at the time the scrolling search begins. So the overhead is minimal: some references to a handful of files. Plus, perhaps, the size of those files on disk, as the index is updated over time.
This may be a costly amount of overhead, if disk space is a serious concern on the server. It's conceivable that an index being updated rapidly enough while a scrolling search context is active may as much as double the disk size required for an index. Toward that end, it's helpful to ensure that you have enough capacity such that an index may grow to 2–3 times its expected size.
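If you go the scroll route from Python, the client's bundled scan helper wraps the scroll/clear-scroll lifecycle for you (a sketch; the index name is made up):

```python
# A sketch using the client's scan helper; the index name is made up.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch("http://localhost:9200")

# scan() pages through every hit via scroll and clears the scroll once the
# iterator is exhausted; preserve_order=False (the default) skips sorting,
# which is the cheap "scan" behavior discussed above.
for hit in scan(es, index="my-index", query={"query": {"match_all": {}}}):
    print(hit["_id"])  # stand-in for real processing
```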
