Elasticsearch refresh effect on cache

In Elasticsearch, does calling refresh or flush clear the field and filter caches?
I have a write-heavy application; is it better to run refresh or flush, or is there a better approach for this?

A refresh makes new documents visible for searching. This happens by writing a new index segment. New segments can also be created by merging existing ones.
Filter and field caches are managed per segment, since a segment is immutable. You can use the warmer APIs to ensure caches are pre-warmed before a segment is made available for search. If you don't, parts of the cache are essentially "cleared" whenever new segments appear.
A flush in Elasticsearch terms actually performs a Lucene commit, which is considerably more expensive.
If you have a write-heavy app, you probably want to increase the refresh interval to get better indexing throughput.
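For concreteness, here is a minimal sketch of those knobs using Python's requests library against the REST API; the host and the index name logs are placeholders:

```python
import requests

ES = "http://localhost:9200"   # placeholder host
INDEX = "logs"                 # hypothetical index name

# For a write-heavy index: raise the refresh interval from the default
# 1s to trade search freshness for indexing throughput.
requests.put(
    f"{ES}/{INDEX}/_settings",
    json={"index": {"refresh_interval": "30s"}},
).raise_for_status()

# Explicit refresh: opens a new searchable segment (and per-segment
# caches must be rebuilt for the new segments).
requests.post(f"{ES}/{INDEX}/_refresh").raise_for_status()

# Flush: performs a Lucene commit, syncing segments to disk and
# trimming the translog. Considerably more expensive than a refresh.
requests.post(f"{ES}/{INDEX}/_flush").raise_for_status()
```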
There are some more details about these things in these two articles:
Elasticsearch from the Bottom Up, Part 1
Elasticsearch Refresh Interval vs Indexing Performance

Related

Elasticsearch: When serving a read request, why not try to find the document in the memtable first to achieve real-time queries?

In the official Elasticsearch document Near real-time search, it says:
In Elasticsearch, this process of writing and opening a new segment is called a refresh. A refresh makes all operations performed on an index since the last refresh available for search.
By default, Elasticsearch periodically refreshes indices every second, ... This is why we say that Elasticsearch has near real-time search: document changes are not visible to search immediately, but will become visible within this timeframe.
I feel a little confused: when serving a read request, why not try to find the document in the memtable first, then in the on-disk segments? If we did that, we would not need to wait for the refresh, which would make real-time queries possible.
Really good question, but to understand why Elasticsearch doesn't serve a search request from in-memory documents, we have to dig a little deeper and understand why segments are created in the first place and why they are immutable.
As you may be aware, segments are the actual physical files that store the data of the search index. Segments are immutable, and this immutability provides a lot of benefits, such as:
Segments can be cached.
Segments can be used in multi-threaded environments without worrying about their state changing.
Because segments are cached and usable from multiple threads, it's much easier to lean on the filesystem cache to provide faster search. Of course, that means you sometimes won't have the newest copy of the data, but that's a better trade-off than iterating through a memtable that is still being modified: the memtable can still show an old version of a document (so you still only get near-real-time data), and it can't be cached, since it isn't immutable, so every search thread would end up searching a dataset that is constantly in motion. If you instead locked the memtable while searching, you would reduce indexing speed.
By the way, this design comes from Lucene; Elasticsearch uses Lucene as a library, so it's not really Elasticsearch that controls this.
Bottom line: even if you searched the memtable without locking and without blocking updates while searching, you still couldn't show truly real-time data, and it would considerably slow down both indexing and search.
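To see the near-real-time behaviour concretely, here is a small sketch (host, index name, and document are made up) showing that a document only becomes searchable after a refresh:

```python
import requests

ES = "http://localhost:9200"   # placeholder host
INDEX = "nrt-demo"             # hypothetical index name

# Index a document without forcing a refresh (the default).
requests.put(f"{ES}/{INDEX}/_doc/1",
             json={"msg": "hello"}).raise_for_status()

# An immediate search will usually miss it: the document sits in the
# in-memory indexing buffer, not yet in a searchable segment. (Recent
# ES versions may refresh a search-idle shard on demand, so results
# can vary.)
r = requests.get(f"{ES}/{INDEX}/_search",
                 json={"query": {"match": {"msg": "hello"}}}).json()
print(r["hits"]["total"])   # typically 0 hits within the refresh interval

# After an explicit refresh a new segment is opened and the document
# becomes visible to search.
requests.post(f"{ES}/{INDEX}/_refresh").raise_for_status()
r = requests.get(f"{ES}/{INDEX}/_search",
                 json={"query": {"match": {"msg": "hello"}}}).json()
print(r["hits"]["total"])   # 1 hit (an object {"value": 1, ...} on ES 7+)
```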
Hope this helps.

Why segment merge in Elasticsearch requires stopping writes to the index

I am looking to run optimize (ES 1.x), which is now known as the forcemerge API in recent ES versions. After reading some articles like this and this, it seems we should run it only on read-only indices; quoting the official ES docs:
Force merge should only be called against read-only indices. Running force merge against a read-write index can cause very large segments to be produced (>5Gb per segment)
But I don't understand:
The reason behind putting the index in read-only mode before running the forcemerge or optimize API.
As explained in the ES doc above, it can cause very large segments. That shouldn't be the case as far as I understand: new updates are first written in memory and only written to segments when a refresh happens, so why can writes during a forcemerge produce very large segments?
Also, is there any workaround if we don't want to put the index in read-only mode but still want to run a force merge to expunge deletes?
Let me know if I need to provide any additional information.
forcemerge can significantly improve the performance of your queries, as it allows you to merge the existing segments into a smaller number of segments, which is more efficient for querying because segments get searched sequentially. While merging, all documents marked for deletion also get cleaned up.
Merging happens regularly and automatically in the background as part of Elasticsearch's housekeeping, based on a merge policy.
The tricky thing: only segments up to 5GB are considered by the merge policy. Using the forcemerge API with the parameter that allows you to specify the number of resulting segments, you risk that the resulting segment(s) grow bigger than 5GB, meaning that they will no longer be considered by future merge requests. As long as you don't delete or update documents, there is nothing wrong with that. However, if you keep on deleting or updating documents, Lucene will mark the old versions of your documents in the existing segments as deleted and write the new versions into new segments. If your deleted documents reside in segments larger than 5GB, no more housekeeping is done on them, i.e. the documents marked for deletion will never get cleaned up.
By setting an index to read-only prior to doing a force-merge, you ensure that you will not end up with huge segments containing a lot of legacy documents, which consume precious resources in memory and on disk and slow down your queries.
A refresh does something different: it's correct that documents you want to get indexed are first processed in memory before getting written to disk. But the data structure that allows you to actually find a document (the "segment") does not get created for every single document right away, as this would be highly inefficient. Segments are only created when the internal buffer gets full, or when a refresh occurs. By triggering a refresh you make a document immediately available for finding. At first the segment still only lives in memory, as, again, it would be extremely inefficient to sync every segment to disk right after it got created. Segments in memory get periodically synced to disk. Even if you pull the plug before a sync to disk has happened, you don't lose any information, as Elasticsearch maintains a translog that allows it to "replay" all indexing requests that have not yet made it into a segment on disk.
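A minimal sketch of that workflow (host and index name are placeholders): block writes, force-merge down to one segment, and, if the index must stay writable, prefer only expunging deletes instead:

```python
import requests

ES = "http://localhost:9200"   # placeholder host
INDEX = "logs-2019.01"         # hypothetical index that is done being written

# Make the index read-only so no new writes or deletes arrive while
# the merge runs. (Set "write": False afterwards to re-enable indexing.)
requests.put(f"{ES}/{INDEX}/_settings",
             json={"index": {"blocks": {"write": True}}}).raise_for_status()

# Merge down to a single segment. The result may exceed the ~5GB
# merge-policy ceiling, which is fine for an index that won't change.
requests.post(f"{ES}/{INDEX}/_forcemerge",
              params={"max_num_segments": 1}).raise_for_status()

# Workaround if the index must stay writable: only rewrite segments
# with many deleted documents instead of merging everything.
# requests.post(f"{ES}/{INDEX}/_forcemerge",
#               params={"only_expunge_deletes": "true"})
```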

How does Elasticsearch handle parallel index refresh requests?

In our project, we are hitting Elasticsearch's index refresh API after each create/update/delete operation for immediate search availability.
I want to know: how will Elasticsearch perform if multiple parallel requests are made to the refresh API of a single index holding close to 2.5 million documents?
Any thoughts or suggestions?
Refresh is an operation where Elasticsearch asks the Lucene shard to write its recent in-memory changes into a new segment, making them searchable.
If you ask for a refresh after every operation, you will create a huge number of micro-segments.
Too many segments make your searches slower, as your shard needs to search through all of them sequentially in order to return a result. They also consume hardware resources.
Each segment consumes file handles, memory, and CPU cycles. More important, every search request has to check every segment in turn; the more segments there are, the slower the search will be.
from the definitive guide
Lucene will merge those segments automatically into bigger segments, but that is also an I/O-consuming task.
You can check this for more details.
But from my knowledge, a refresh on a 2.5 million document index will take roughly the same time as on a 2.5k document index.
Also, it seems (from this issue) that refresh is a non-blocking operation.
But it's a bad pattern for an Elasticsearch cluster. Does every CUD operation in your application really need a refresh?
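If some operations genuinely need read-your-own-writes, a gentler alternative to hammering the refresh API is the per-request refresh parameter on the document APIs (available since ES 5.0). A sketch with placeholder names:

```python
import requests

ES = "http://localhost:9200"   # placeholder host
INDEX = "products"             # hypothetical index name

# refresh=wait_for blocks the indexing call until the document becomes
# visible through a normally scheduled refresh, instead of forcing an
# extra refresh per operation.
requests.put(f"{ES}/{INDEX}/_doc/42",
             params={"refresh": "wait_for"},
             json={"name": "widget"}).raise_for_status()

# refresh=true forces an immediate refresh: fine for a rare operation,
# but it creates a flood of micro-segments if every CUD request does it.
requests.delete(f"{ES}/{INDEX}/_doc/42",
                params={"refresh": "true"}).raise_for_status()
```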

How to compare performance of Neo4j queries without caches?

I've been trying to compare query performance in Neo4j.
In order to make the queries more efficient, I added an index, analysed the results using PROFILE, and tried doing the same while using USING INDEX.
On most queries, DB hits were much better with the second option (with USING INDEX) and rows were the same or fewer, but the timing doesn't seem reliable: on several queries adding USING INDEX was slower despite the better profile numbers (DB hits & rows), and times got much better when re-executing a query.
In order to stop the cache from interfering, I went to the properties file, changed cache_type in neo4j.properties to none and restarted Neo4j, but it still seems like the results of the same query come back faster each time (up to a certain point).
What would be the best way to test this?
Neo4j (up to 2.2.x) has a two-layered cache architecture. With cache_type=none you switch off just the object cache. To disable the page cache as well, you can use dbms.pagecache.memory=0. However, if all caches are disabled you basically measure the speed of your I/O subsystem, since every query goes down to the bare metal and reads from disk.
I recommend a different approach: enable both caches and run the queries you want to compare multiple times to warm up caches. Take measurement on warmed cache since this is much closer to a real production scenario.
On a side note: in Neo4j 2.3 the object cache will go away and we just have the page cache.
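A sketch of the warmed-cache approach with the official Neo4j Python driver; the connection details and the Cypher query are placeholders:

```python
import time
from neo4j import GraphDatabase  # official Neo4j Python driver

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # placeholders
query = "MATCH (p:Person {name: $name}) RETURN p"          # hypothetical query

with driver.session() as session:
    timings = []
    for _ in range(10):
        start = time.perf_counter()
        session.run(query, name="Alice").consume()  # drain the result fully
        timings.append(time.perf_counter() - start)

# The first runs warm the page cache (and, pre-2.3, the object cache);
# compare queries using the later, warmed measurements.
print("cold runs:", timings[:3])
print("best warm run:", min(timings[3:]))
driver.close()
```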

Elasticsearch TTL vs daily dropping of indices

I understand that there are two dominant patterns for keeping a rolling window of data inside elasticsearch:
1. creating daily indices, as suggested by logstash, and dropping old indices, and therefore all the records they contain, when they fall out of the window
2. using Elasticsearch's TTL feature and a single index, having Elasticsearch automatically remove old records individually as they fall out of the window
Instinctively I go with 2, as:
I don't have to write a cron job
a single big index is easier to communicate to my colleagues and for them to query (I think?)
any nightmare stream dynamics that cause old log events to show up don't lead to the creation of new indices; the old events only hang around for the 60s period that Elasticsearch uses to do TTL cleanup.
But my gut tells me that dropping an index at a time is probably a lot less computationally intensive, though tbh I've no idea how much less intensive, nor how costly the TTL is.
For context, my inbound streams will rarely peak above 4K messages per second (mps) and are much more likely to hang around 1-2K mps.
Does anyone have any experience with comparing these two approaches? As you can probably tell I'm new to this world! Would appreciate any help, including even help with what the correct approach is to thinking about this sort of thing.
Cheers!
Short answer is, go with option 1 and simply delete indexes that are no longer needed.
Long answer: it somewhat depends on the volume of documents that you're adding to the index and your sharding and replication settings. If your index throughput is fairly low, TTLs can be performant, but as you start to write more docs to Elasticsearch (or if you have a high replication factor) you'll run into two issues.
1. Deleting documents with a TTL requires Elasticsearch to run a periodic service (IndicesTTLService) to find documents that have expired across all shards and issue deletes for all those docs. Searching a large index can be a pretty taxing operation (especially if you're heavily sharded), but worse are the deletes.
2. Deletes are not performed instantly within Elasticsearch (Lucene, really); instead, documents are "marked for deletion". A segment merge is required to expunge the deleted documents and reclaim disk space. If you have a large number of deletes in the index, it'll put much more pressure on your segment merge operations, to the point where it will severely affect other thread pools.
We originally went the TTL route and had an ES cluster that was completely unusable and began rejecting search and indexing requests due to greedy merge threads.
You can experiment with "what document throughput is too much?" but judging from your use case, I'd recommend saving some time and just going with the index deletion route which is much more performant.
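For illustration, the deletion route boils down to one DELETE per expired daily index; a minimal sketch (host, index prefix, and retention window are placeholders; in practice a tool like curator does this for you):

```python
from datetime import date, timedelta
import requests

ES = "http://localhost:9200"   # placeholder host
PREFIX = "logstash-"           # hypothetical daily-index prefix
RETENTION_DAYS = 30            # hypothetical rolling window

# Deleting a whole index is a cheap metadata operation: its segment
# files are removed outright, with no per-document delete or merge cost.
cutoff = date.today() - timedelta(days=RETENTION_DAYS)
resp = requests.delete(f"{ES}/{PREFIX}{cutoff:%Y.%m.%d}")
if resp.status_code not in (200, 404):   # 404: index already gone
    resp.raise_for_status()
```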
I would go with option 1 - i.e. daily dropping of indices.
Daily Dropping Indices
pros:
This is the most efficient way of deleting data
If you need to restructure your index (e.g. apply a new mapping, increase number of shards) any changes are easily applied to the new index
Details of the current index (i.e. the name) are hidden from clients by using aliases
Time-based searches can be directed at only a specific small index
Index templates simplify the process of creating the daily index (see the sketch below).
These benefits are also detailed in the Time-Based Data Guide, see also Retiring Data
cons:
Needs more work to set up (e.g. setting up cron jobs), but there is a plugin (curator) that can help with this.
If you perform updates on data, then all versions of a document will need to sit in the same index, i.e. multiple indices won't work for you.
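As a sketch of the alias and template points above (names and settings are placeholders; the body uses the legacy template syntax from the era of this question):

```python
import requests

ES = "http://localhost:9200"   # placeholder host

# Index template: every index matching logstash-* is created with the
# desired settings and joins a stable "logs" alias automatically.
requests.put(f"{ES}/_template/logs-daily", json={
    "template": "logstash-*",            # "index_patterns" on ES 6+
    "settings": {"number_of_shards": 2}, # hypothetical settings
    "aliases": {"logs": {}},
}).raise_for_status()

# Clients search the alias and never see the dated index names.
requests.get(f"{ES}/logs/_search",
             json={"query": {"match_all": {}}}).raise_for_status()
```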
Use of TTL or Queries to delete data
pros:
Simple to understand and easily implemented
cons:
When you delete a document, it is only marked as deleted. It won’t be physically deleted until the segment containing it is merged away. This is very inefficient as the deleted data will consume disk space, CPU and memory.
