Elasticsearch cluster leaving shards unassigned - elasticsearch

We're running an Elasticsearch cluster for logging, indexing logs from multiple locations using Logstash. We recently added two additional nodes for extra capacity whilst we await further hardware for the cluster's expansion. Ultimately we aim to have 2 nodes for "realtime" data running on SSDs to provide fast access to recent data, ageing the data over to HDDs for older indices. The new nodes we put in have a lot less disk space than the existing boxes (700GB vs 5TB), but given this will be similar to the situation we'd have once we implement SSDs, I didn't foresee it being much of a problem.
As a first attempt, I threw the nodes into the cluster trusting that the new disk-space-based allocation rules would stop them being instantly filled up. Unfortunately this wasn't the case: I awoke to find the cluster had merrily reallocated shards onto the new nodes, filling them in excess of 99%. After some jigging of settings I managed to remove all data from these nodes and return the cluster to its previous state (all shards assigned, cluster state green).
As a next approach I tried to implement index/node tagging similar to my plans for when we implement SSDs. This left us with the following configuration:
Node 1 - 5TB, tags: realtime, archive
Node 2 - 5TB, tags: realtime, archive
Node 3 - 5TB, tags: realtime, archive
Node 4 - 700GB, tags: realtime
Node 5 - 700GB, tags: realtime
(all nodes running elasticsearch 1.3.1 and oracle java 7 u55)
Using Curator I then tagged indices older than 10 days as "archive" and more recent ones as "realtime". Behind the scenes this sets the index shard allocation "require" setting. My understanding is that this requires the node to have the tag, but not ONLY that tag.
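For reference, my understanding is that the effect is roughly equivalent to setting the allocation requirement per index by hand, something like this (the index name is just an example and "tag" is the node attribute we use):

curl -XPUT 'localhost:9200/logstash-2014.07.20/_settings' -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.require.tag": "archive"
}'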
Unfortunately this doesn't appear to have had the desired effect. Most worryingly, no indices tagged as archive are allocating their replica shards, leaving 295 shards unassigned. Additionally, the realtime-tagged indices are only using nodes 4, 5 and, oddly, 3. Node 3 has no shards except the very latest index and some kibana-int shards.
If I remove the tags and use exclude._ip to pull shards off the new nodes, I can (slowly) return the cluster to green, as this is the approach I took when the new nodes had filled up completely, but I'd really like to get this setup sorted so I can have confidence the SSD configuration will work when the new kit arrives.
I have attempted to set cluster.routing.allocation.allow_rebalance to always, on the theory that the cluster wasn't rebalancing due to the unassigned replicas.
I've also tried setting cluster.routing.allocation.enable to all, but again, this has had no discernible impact.
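(For reference, the dynamic form of those changes via the cluster settings API would be something like the following; the host and the transient scope are just illustrative:)

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient": {
    "cluster.routing.allocation.allow_rebalance": "always",
    "cluster.routing.allocation.enable": "all"
  }
}'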
Have I done something obviously wrong? Or are there diagnostics of some sort I could use? I've been visualising the allocation of shards using the Elasticsearch Head plugin.
Any assistance would be appreciated, hopefully it's just a stupid mistake that I can fix easily!
Thanks in advance

This probably doesn't fully answer your question, but seeing as I was looking at these docs this morning:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html#disk
You should be able to set watermarks on disk usage in your version to avoid this recurring.
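As a sketch (values and host are only illustrative, check the docs above for your version):

curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent": {
    "cluster.routing.allocation.disk.threshold_enabled": true,
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%"
  }
}'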
For (manual) monitoring of clusters I quite like
https://github.com/lmenezes/elasticsearch-kopf
Currently watching my cluster sort out its shards again (so slow) after a similar problem, but I'm still running an ancient version.

Related

How to autoscale Elasticsearch in Kubernetes based on load?

I am using Google Cloud and I am doing R&D on whether we can apply HPA (Horizontal Pod Autoscaling) to Elasticsearch in Kubernetes.
I set up Elasticsearch on Kubernetes using: https://github.com/elastic/helm-charts/tree/master/elasticsearch
But I found a post on the forum saying that HPA for Elasticsearch is hard:
https://discuss.elastic.co/t/how-to-scale-up-and-down-nodes-automatically/224089/2
So is it possible to do HPA on elasticsearch or not ?
I don't think it would work well and you'd be at risk of losing data. HPA tends to respond to load changes on a scale of a minute or so, and can occasionally make large changes (scaling from 5 replicas to 2, for example). For Elasticsearch you need to scale in one node at a time, monitor the state of the cluster before you can proceed, and it could take a long time before you can move from one node to the next.
Say you're running Elasticsearch in a StatefulSet. Remember that each ES index is made up of shards; you will have some number of copies of the shards spread out across the nodes in the ES cluster. So also let's say your index has 10 shards with 2 replicas each.
Scaling out is easy. Increase the size of the StatefulSet as much as you want; configure each node to talk to es-0 for discovery. ES will see that the cluster has grown and start moving shards on to the new nodes on its own.
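As a sketch, assuming the StatefulSet is simply named elasticsearch (adjust the name and namespace to whatever your chart creates):

# Scale out; Elasticsearch will start relocating shards onto the new pods by itself
kubectl scale statefulset elasticsearch --replicas=10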
Scaling in is hard. You can only delete one node at a time. Once that node is shut down and the cluster realizes it's missing, then the shards that used to be on that node will be under-replicated. ES will create new replicas on the remaining nodes and copy the shard data there. You can observe this happening through things like the /_cat/shards API. The index status will be "yellow" as long as there are under-replicated shards, and then will switch back to "green" once the replication sequence finishes.
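A rough way to watch that from outside the cluster (host/port are assumptions):

# Overall status: "yellow" while shards are under-replicated, "green" once recovery finishes
curl -s 'localhost:9200/_cluster/health?pretty'
# Per-shard view; UNASSIGNED/INITIALIZING rows show what is still being recovered
curl -s 'localhost:9200/_cat/shards?v'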
So say you currently have 8 nodes and you want to scale down to 6. There's some possibility the only two copies of a given shard will be on es-6 and es-7, so you can't turn those both off together; you have to turn off es-7 first, wait for replication to catch up, then turn off es-6. There's also some possibility that, when you turn off es-7, the new replicas will be created on the doomed node es-6.
You can also tell Elasticsearch to move shards off of a node before removing it. This avoids the cluster going into the "yellow" state, but it's harder to monitor.
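A sketch of that approach (the node name es-7 is just an example):

# Ask ES to relocate all shards off the node you're about to remove,
# wait for relocation to finish, then scale the StatefulSet down
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "transient": { "cluster.routing.allocation.exclude._name": "es-7" }
}'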
Finally, the re-replication can take a long time depending on how much data is actually in the cluster. (In a past cluster I maintained with a badly-designed index structure, it took multiple hours to shut down a node.) That's a much slower operation than HPA is prepared for.

number of nodes in elasticsearch cluster

In our university we have an Elasticsearch cluster with 1 node. Now we have money to install more powerful servers. We produce 7-10 million access logs / day.
What is better, to create a cluster with:
a. 3 powerful servers, each 64GB and 16 CPU + SSD, or
b. 14 not so powerful servers, each 32GB and 8 CPU + SSD?
ps: a & b have the same price.
c. may be some recommendation?
Thank you in advance
It depends on the scenario. For the logging case you're describing, option b seems more flexible to me. Let me explain my opinion:
As you are in a logging scenario, implement the hot/warm architecture. You'll mainly write to and read from recent indices. In a few cases you'll want to access older data, and you'll probably want to shrink old indices and close even older ones.
Set up at least 3 master-eligible nodes to prevent split-brain problems. Configure the same nodes also as coordinating nodes (11 nodes left).
Install 2 ingest nodes to move the ingestion workload to dedicated nodes (9 nodes left).
Install 3 hot data nodes for storing the most recent indices (6 nodes left).
Install 6 warm data nodes for holding older, shrunk and closed indices (0 nodes left).
The previous setup is just an example. The node numbers/roles should be changed depending on your needs:
If you need more resiliency, add more master nodes and increase the replica count for your indices. This will also reduce the total capacity.
The more old data you need to keep searchable, or held in already-closed indices, the more warm nodes you'll need. Rebalance the hot/warm node count according to your needs; if you can drop your old data early, increase the hot node count instead (a sketch of ageing an index from hot to warm follows after this list).
If you have X-Pack licensed, consider ML/alerting nodes. Add these roles to the master nodes, or reduce the data node count in favour of ML/alerting.
Do you need Kibana/Logstash? Depending on the workload, prepare one or two nodes exclusively for them.
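As a rough sketch of that hot/warm ageing (the attribute name box_type, the index names and the host are assumptions; use whatever node attribute you actually define):

# Move an older index to the warm tier...
curl -XPUT 'localhost:9200/accesslog-2018.05/_settings' -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.require.box_type": "warm"
}'
# ...and close an even older one that only needs to be kept around
curl -XPOST 'localhost:9200/accesslog-2018.01/_close'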
Assuming the same mainboards in both options, you have more potential to quickly scale the 14 boxes up just by adding more RAM/CPU/storage. With 3 nodes already maxed out on specs, you'd need to set up new boxes and join them to the cluster in order to scale up; on the other hand, that also gets more recent hardware into your rack over time.
Please also have a look at this: https://www.elastic.co/pdf/architecture-best-practices.pdf
If you need some background on sharding configuration, please see ElasticSearch - How does sharding affect indexing performance?
BTW: Thomas is right with his comment about the heap size. Have a look at this if you want to know the background: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html

setting up a basic elasticsearch cluster

I'm new to Elasticsearch and would like someone to help me clarify a few concepts.
I'm designing a small cluster with the following requirements:
everything should still work when restarting one of the machines, one at a time (eg: OS updates)
a single disk failure is ok
heavy indexing should not impact query performance
How many master, data, ingest nodes should I have?
or do I need 2 clusters?
the indexing workload is purely indexing structured text documents, no processing/rules... do I even need an ingest node?
Also, does each node have a complete copy of all the data, or does only the cluster as a whole have the complete copy?
Be sure to read the documentation about Elasticsearch terminology at the very least.
With the default of 1 replica (primary shard and one replica shard) you can survive the failure of 1 Elasticsearch node (failed disk, restart, upgrade,...).
"heavy indexing should not impact query performance": You'll need to size your cluster correctly to handle both the indexing and searching. If you want to read current data and you do heavy updates, that will take up resources and you won't be able to fully decouple it.
By default every node is a data, ingest, and master-eligible node. The minimum setup for HA needs 3 nodes. If you don't use ingest that's fine; it won't take up resources when you're not using it.
To understand which node has which data, you need to read up on the concept of shards. Basically every index is broken up into 1 to N shards (current default is 5) and there is one primary and one replica copy of each one of them (by default).
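As a minimal sketch (index name and host are just examples): with 3 primary shards and 1 replica each, every document lives on two different nodes, so a single node can be restarted or lost without losing data:

curl -XPUT 'localhost:9200/documents' -H 'Content-Type: application/json' -d '{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'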

Elasticsearch - general architecture and Elastic Cloud questions

Background
We're designing the architecture of a new system using Elasticsearch now, and we plan to use Elastic Cloud based on reviews contrasting their service with AWS's, and self-hosting on an EC2 instance. As we design the system, I'm trying to learn from a small test project my team deployed on Elastic Cloud 6 months ago. While I've spent a lot of time reading the Elasticsearch Docs, Elasticsearch: The Definitive Guide, and Elastic Cloud's Docs, there are some concepts here that I'm still not understanding.
Our Test Project's issues
Our test project uses the default of 5 primary shards and 1 replica shard per primary. It was configured using the default deployment options on Elastic Cloud with a single node, currently with 2GB of memory. Because there is only one node, and because replica shards are never assigned to the same node as their primary shard, none of the replicas are getting assigned. Also, this project uses time-based data, and is creating one index per account per day, resulting in about 10 indices per day (or 100 shards), and over time, the proverbial Kagillion Shards. This system was only ever meant to have several months of data on it at a time, so the solution has been to manually delete old data when memory on this deployment runs out.
The New System
Our new system is meant to have 5 years worth of time based-data on it, which is projected to grow to 250 GB in size. The current implementation uses a single index for the time-based data, with 6 primary shards and 1 replica per primary. This decision was made based on reading that a single shard should aim for a maximum of 30GB in size.
Questions
Our old system had one node with too many indexes (over 100) and too many shards (over 1000), and it seems like our new one is being designed with too few (one index for 5+ years of data). It seems a better indexing strategy according to the time-based data recommendations would be to create one index per week or month? That being said, according to another answer on SO the optimal number of indexes per node is 1, so what is the utility in creating multiple indices for time-based data in the first place if we're only running on one node?
How does one add a node to an ES deployment in Elastic Cloud? Currently all of the replica shards in the test project are unassigned, because the deployment only has one node. There is a slider which allows you to easily choose the memory of each node in a deployment (between 1GB and 250GB), however I see no way to add multiple nodes, which is confusing because it seems like basic functionality for Elasticsearch.
Our test project's node has restarted several times, always when there is lots of old data on the node, and therefore memory pressure. The solution has been to delete old data (as the test project was only meant to have several months of data at a time), but it appears the node didn't lose data when it restarted. Why would this be?
Our test project has taken no snapshots, which are supposed to happen automatically on Elastic Cloud every 30 minutes. I've asked their support about this, but just curious to see if anyone knows what could cause this and how to resolve it?
Our test project uses the default of 5 primary shards and 1 replica shard per primary. It was configured using the default deployment options on Elastic Cloud with a single node
Clearly, on a single node, you cannot have replicas. So your index should have been configured with 0 replicas and you can do it dynamically to get your cluster back to green (PUT index/_settings {"index.number_of_replicas": 0}), simple as that.
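With curl that would be something along these lines (use your actual index name, or _all for every index):

curl -XPUT 'localhost:9200/my-index/_settings' -H 'Content-Type: application/json' -d '{
  "index.number_of_replicas": 0
}'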
Also, this project uses time-based data, and is creating one index per account per day, resulting in about 10 indexes per day (or 100 shards)
I cannot tell whether 50 new primary shards (10 indices) per day was reasonable or not, because you don't give any information regarding the volume of data in your test project. But it's probably too many.
It seems a better indexing strategy according to the time-based data recommendations would be to create one index per week or month?
Having five years' worth of data in a single index is perfectly possible; it doesn't really depend on how old the data is, but on how big it grows. You mention 250GB, and also that you know a shard shouldn't grow over 30GB (and that again depends on the spec of your hardware underneath, more on that later). Since you only have 6 shards for that index, each shard will grow over 40GB (which is OK according to this), but to be on the safe side you should probably increase to 8-9 shards, or split your data into yearly/monthly indices.
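If you go the monthly-index route, one way to sketch it (assuming Elasticsearch 6.x; the template name, pattern and shard count are only examples to adapt to your sizing) is an index template, so that each new month's index picks up the same settings:

curl -XPUT 'localhost:9200/_template/timedata' -H 'Content-Type: application/json' -d '{
  "index_patterns": ["timedata-*"],
  "settings": { "number_of_shards": 2, "number_of_replicas": 1 }
}'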
The 30GB-ish limit per shard is also dependent on how much heap your nodes have. If you have nodes with 2GB heap, then having 30GB shards is clearly too big. Since you're on ES Cloud and you plan to have 250GB of data, you must have chosen a node capacity of 16GB heap + 384GB storage (or bigger). So with 16GB heap, it's reasonable to have 30GB shards, but you'll need several nodes in my opinion. You can verify how many nodes you have using GET _cat/nodes?v.
That being said, according to another answer on SO the optimal number of indexes per node is 1...
What Chris is saying is a theoretical/ideal setting, which is almost never possible/advisable/desirable in reality. You do want to have several shards in your index, and the reason is that when your data grows, you want to be able to scale out to more than one node; that's the whole point of ES, otherwise you'd be better off embedding the Lucene library directly in your project.
..., so what is the utility in creating multiple indices for time-based data in the first place if we're only running on one node?
First check how many nodes you have in your cluster using GET _cat/nodes?v, but clearly if you're assigned a single node for 250GB of data split on 6-8 shards, a single node is not ideal, indeed.
How does one add a node to an ES deployment in Elastic Cloud?
Right now, you can't. However, at the last Elastic{ON} conference, Elastic announced that it will be possible to pick the number of nodes or the kind of deployment (hot/warm, etc) you want to set up.
Currently all of the replica shards in the test project are unassigned, because the deployment only has one node.
You don't really need replicas in a test project, right?
The solution has been to delete old data (as the test project was only meant to have several months of data at a time), but it appears the node didn't lose data when it restarted. Why would this be?
How did you delete the data? Between the time you deleted the data and before the node restarted, did you witness that the data was indeed gone?
Our test project has taken no snapshots, which are supposed to happen automatically on Elastic Cloud every 30 minutes.
This is weird, since on ES cloud your cluster generally gets snapshotted every 30 minutes. What do you see under Deployments > cluster-id > Elasticsearch > Snapshots? What does the ES Cloud support say about it? What do you get when running GET _cat/repositories?v and GET _cat/snapshots/found-snapshots?v? (update your question with the results)

Shards / Replicas settings for high availability

We have a Java application with embedded Elasticsearch in a cluster of 14 nodes. All the data resides in a central database, and is indexed in Elasticsearch for querying. A full reindex can be done at any time.
The system is very query-heavy; the amount of writes is small. The number of documents will not be higher than, say, 300,000.
The size of each document varies greatly, from just a couple of IDs to extracted text from e.g. Word documents of several pages.
I want to make sure that in case of a total breakdown, it should be sufficient that one or two nodes are available for the system to work.
Write consistency should not be a problem since the master copy of the data is in the database, and it seems that ES is capable of resolving conflicting data by using the newest version (which should be all right in our case)
My first thought is to use 1 shard and 13 replicas. This will naturally ensure that all nodes have access to all data. The same could also be accomplished with 2 shards / 13 replicas, so it follows that to ensure all data is available, the number of replicas should be the number of nodes minus 1, regardless of the number of shards (which could be anything).
If the requirement is relaxed to "2 nodes should be up at any time", then a distribution of x shards / (number of nodes - 2) replicas should be sufficient.
So, for the question:
Assuming the above setup and that my thoughts are correct, would a setup with 1 shard / 13 replicas make sense, or would there be anything to gain by adding more shards and running e.g. a 4 shards / 13 replicas setup?
After a good bit of research and talking to ES gurus:
As long as the shard size is small enough, the most efficient way of setting up this cluster would indeed be 1 shard only, with 13 replicas. I have not been able to pinpoint the threshold shard size at which this starts to perform worse.
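For completeness, a sketch of what that index definition would look like (index name and host are just examples):

curl -XPUT 'localhost:9200/app-data' -H 'Content-Type: application/json' -d '{
  "settings": { "number_of_shards": 1, "number_of_replicas": 13 }
}'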
If the index is big... you will need more than one shard (if you want performance). Do you really need 13 replicas? If you use only 2 replicas, ES manages to keep it that way: if the node holding the primary fails, ES will create a new replica. Maybe you will need a balancer node too.
