ElasticSearch Downtime From Closing and Opening Index

I'm managing an ElasticSearch cluster and I need to add an analyzer to one of my indices. The particular index I want to update is a bit more than 3TB. Will there be an excessive amount of downtime associated with closing and reopening this large of an index to add the analyzer? The documentation doesn't seem to say anything about the processing required to close and open an index.
I have done many rolling restarts and the shard recovery is pretty quick, but I'm guessing that closing and opening an index cannot be done one node at a time with a rolling restart.

As per the official documentation of the open index API:
When opening or closing an index, the master is responsible for
restarting the index shards to reflect the new state of the index. The
shards will then go through the normal recovery process. The data of
opened/closed indices is automatically replicated by the cluster to
ensure that enough shard copies are safely kept around at all times.
This clearly explains that it's not a cheap operation: if you have many shards in your cluster and your cluster state is big, propagating that updated state to all the nodes can cause significant overhead.
Apart from this, opening and closing an index also allocates its shards, as explained in the same document under the wait_for_active_shards section:
Because opening or closing an index allocates its shards, the
wait_for_active_shards setting on index creation applies to the _open
and _close index actions as well.
And this is a major overhead, as it involves moving data (i.e. shards) around the cluster; since yours is a very large index, it can cause huge data movement in your cluster.
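For reference, the sequence itself would look roughly like this (a minimal sketch; the index name my-index and the analyzer definition are placeholders, not taken from your setup):

    # Close the index (no indexing or searching possible while it is closed)
    curl -X POST "localhost:9200/my-index/_close"

    # Add the analyzer to the index settings
    curl -X PUT "localhost:9200/my-index/_settings" -H 'Content-Type: application/json' -d'
    {
      "analysis": {
        "analyzer": {
          "my_custom_analyzer": {
            "type": "custom",
            "tokenizer": "standard",
            "filter": ["lowercase"]
          }
        }
      }
    }'

    # Reopen the index; the shards then go through the recovery/allocation described above
    curl -X POST "localhost:9200/my-index/_open"

The index is unavailable for the whole close/update/open window, plus the recovery time afterwards.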
Hope this helps.

Related

How does elasticsearch prevent cascading failure after node outage due to disk pressure?

We operate an elasticsearch stack which uses 3 nodes to store log data. Our current config is to have indices with 3 primaries and 1 replica. (We have just eyeballed this config and are happy with the performance, so we decided to not (yet) spend time on optimization.)
After a node outage (let's assume a full disk), I have observed that elasticsearch automatically redistributes its shards to the remaining instances - as advertised.
However this increases disk usage on the remaining two instances, making it a candidate for cascading failure.
Durability of the log data is not paramount. I am therefore thinking about reconfiguring elasticsearch to not create a new replica after a node outage. Instead, it could just run on the primaries only. This means that after a single node outage, we would run without redundancy. But that seems better than a cascading failure. (This is a one time cost)
An alternative would be to just increase disk size. (This is an ongoing cost)
My question
(How) can I configure elasticsearch to not create new replicas after the first node has failed? Or is this considered a bad idea and the canonical way is to just increase disk capacity?
Rebalancing is expensive
When a node leaves the cluster, some additional load is generated on the remaining nodes:
Promoting a replica shard to primary to replace any primaries that were on the node.
Allocating replica shards to replace the missing replicas (assuming there are enough nodes).
Rebalancing shards evenly across the remaining nodes.
This can lead to quite some data being moved around.
Sometimes, a node is only missing for a short period of time, and a full rebalance is not justified in such a case. To account for that, when a node goes down, elasticsearch immediately promotes a replica shard to primary for each primary that was on the missing node, but then it waits for one minute before creating new replicas, to avoid unnecessary copying.
Only rebalance when required
The duration of this delay is a tradeoff and can therefore be configured. Waiting longer means less chance of useless copying but also more chance for downtime due to reduced redundancy.
Increasing the delay to a few hours results in what I am looking for. It gives our engineers some time to react before the additional rebalancing load can trigger a cascading failure.
I learned that from the official elasticsearch documentation.
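For completeness, the setting in question is index.unassigned.node_left.delayed_timeout (default 1m). A sketch of raising it across existing indices; the "5h" value is only illustrative:

    # Delay re-replication after a node leaves the cluster
    curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
    {
      "settings": {
        "index.unassigned.node_left.delayed_timeout": "5h"
      }
    }'

Note that _all applies this to existing indices only; newly created indices need the setting applied again (e.g. via an index template).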

Elasticsearch maximum index count limit

Is there any limit on how many indexes we can create in elastic search?
Can 100 000 indexes be created in Elasticsearch?
I have read that a maximum of 600-1000 indices can be created. Can it be scaled?
e.g.: I have a number of stores, and each store has items. Each store will have its own index where its items will be indexed.
There is no hard limit as such, but obviously you don't want to create too many indices (what counts as too many depends on your cluster, nodes, size of indices, etc.). In general it's not advisable, as it can have a severe impact on cluster functioning and performance.
Please check Loggly's blog; their first point is about proper provisioning, and below is the important, relevant text from the same blog:
ES makes it very easy to create a lot of indices and lots and lots of
shards, but it’s important to understand that each index and shard
comes at a cost. If you have too many indices or shards, the
management load alone can degrade your ES cluster performance,
potentially to the point of making it unusable. We’re focusing on
management load here, but running too many indices/shards can also
have pretty significant impacts on your indexing and search
performance.
The biggest factor we’ve found to impact management overhead is the
size of the Cluster State, which contains all of the mappings for
every index in the cluster. At one point, we had a single cluster with
a Cluster State size of over 900MB! The cluster was alive but not
usable.
Edit: Thanks #Silas, who pointed out that from ES 2.X, cluster state updates are not that costly (as only the diff is sent in the update call). More info on this change can be found in this ES issue.
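If you want a rough feel for how big your own cluster state has grown, one crude check is to measure the serialized state (the JSON over the wire is not exactly the in-memory size, but it gives the order of magnitude):

    # Dump the cluster state and measure its serialized size in bytes
    curl -s "localhost:9200/_cluster/state" | wc -c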

solr/elasticsearch search request is handled by shards or replica?

I have designed a Solr/Elasticsearch setup for searching, and I have a particular question. Suppose I have 10K search requests/second. Where will my searches run, on the shards or on the replicas? I know a replica is a backup of a shard.
If it happens on the shards then how/why, and if it's on the replicas then how/why?
A primary shard is the original copy of the data, while a replica shard is a copy of that original data.
Indexing always happens on the original copy, i.e. the primary shard, and is then copied to the replica shards, but a search can happen on any copy, irrespective of whether it is the original or a replica.
Hence replicas are created not only for fault tolerance (if you lose one copy, it can be recovered from another copy), but also to improve search performance: if one shard copy is overloaded (primary or replica), the search runs on the least-loaded copy, i.e. another replica.
Please refer to Adaptive replica selection in ES on how/why replicas improve the search latency.
Feel free to let me know if you need more information.
EDIT based on OP comment:
From ES 7, adaptive replica selection is on by default, so requests are sent to the least-loaded copy; even if all shards are underutilized, ES still won't send all search requests to the primary shards, to avoid overloading them. Also, before ARS (adaptive replica selection), ES used to send search requests in round-robin fashion to avoid overloading any one shard.
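If you want to verify or toggle this on your own cluster, ARS is controlled by a dynamic cluster setting (shown here only as a sketch; on ES 7+ it is already true by default):

    # Explicitly enable adaptive replica selection (opt-in on 6.x, default on 7.x and later)
    curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
    {
      "persistent": {
        "cluster.routing.use_adaptive_replica_selection": true
      }
    }'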

What are the functionalities that index recovery does in ElasticSearch?

I am new to ElasticSearch and I am confused about the meaning of index recovery.
What are the operations index recovery performs?
Does it mean recovering the data inside the index or allocating unassigned shards?
Index recovery means loading shards from disk and making them usable for your query operations. This can happen when you start a node, create new replicas, add or remove a node in your cluster, or when a node that crashed is restarting. There can be multiple operations involved in the process. If a shard is coming up, it will ask the other shard copies what data they have and do an integrity check. If a new node has been added and there is no shared disk, there will be data movement. If a new primary shard has to be selected, the primary should be the copy holding the most data at that time, so the nodes need to be in sync. To handle all these cases, there are dozens of other tasks performed during the recovery process.
According to ElasticSearch Reference:
A recovery event occurs anytime an index shard moves to a different node in the cluster. This can happen during a snapshot recovery, a change in replication level, node failure, or on node startup. This last type is called a local store recovery and is the normal way for shards to be loaded from disk when a node starts up.
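You can watch these recovery events as they happen with the cat recovery API, for example:

    # Show recovery activity per shard (stage, type, and percentage of files/bytes recovered)
    curl -s "localhost:9200/_cat/recovery?v"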

Elasticsearch indices recovery

I'm learning how Elasticsearch (version 5.3.0) works in order to try and use it. I've read documentation, Elasticsearch Reference and some ES blog posts too but I couldn't find how indices (shards?) recovery works.
Let's assume a node A turns off and then becomes active again. If the cluster didn't stop its activity and some documents were indexed, how are those changes synchronized with node A? Does ES replace all files, or is there a mechanism to communicate only the changes to that node?
References and documentation are welcomed.
Thank you in advance for the responses.
These days Elasticsearch does a diff between the segments (files) in the primary shard and the ones in the replica shard. Whatever is different is copied over fresh from the primary.
In future though (ES 6), there will be sequence IDs: https://github.com/elastic/elasticsearch/issues/10708
The advantage of having these is that ES will make a first attempt to compare the sequence IDs from the primary and replica and see how "far apart" they are. If the translog of the primary shard still has all the changes since the replica went offline, ES will simply replay the operations from the primary shard translog on the replica shard. If not all the operations are there anymore, it will fall back to the segment diffing (the current approach).
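You can see the segment-diff behaviour in practice with the index recovery API, which reports per shard how many segment files were reused from the local copy versus copied over from the primary (my-index is a placeholder for whichever index you are inspecting):

    # Per-shard recovery details, including reused vs. recovered files and bytes
    curl -s "localhost:9200/my-index/_recovery?human"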
