I am seeing the following index as Unassigned, which is very annoying. How do I get rid of it?
Those unassigned shards are the unassigned replicas of the primary shards held on your master node.
The main purpose of replicas is for failover: if the node holding a primary shard dies, then a replica is promoted to the role of primary.
At index time, a replica shard does the same amount of work as the primary shard. New documents are first indexed on the primary and then on any replicas. Increasing the number of replicas does not change the capacity of the index.
However, replica shards can serve read requests. If, as is often the case, your index is search-heavy, you can increase search performance by increasing the number of replicas, but only if you also add extra hardware.
In order to assign these shards, you need to run a new instance of Elasticsearch to create a second node that can carry the data replicas. (The node can be master-eligible or just a workhorse; you can set those configurations in the Elasticsearch config files.)
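For reference, here is a minimal sketch (assuming a cluster reachable at http://localhost:9200 and the Python `requests` library) of how you might confirm that the unassigned shards are replicas, and check the cluster's health before and after the second node joins:

```python
import requests

ES = "http://localhost:9200"  # assumption: your cluster's HTTP endpoint

# Cluster health: "yellow" means all primaries are assigned
# but some replicas are not.
health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"], "- unassigned shards:", health["unassigned_shards"])

# List only the unassigned shards; the "prirep" column shows
# "r" for replicas, which is what you should see here.
shards = requests.get(f"{ES}/_cat/shards",
                      params={"h": "index,shard,prirep,state"})
for line in shards.text.splitlines():
    if "UNASSIGNED" in line:
        print(line)
```

Once the second node has joined, re-running the health check should report green with 0 unassigned shards.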
For more details you can refer to the official documentation and the Elasticsearch Definitive Guide (work on it is still in progress, but you will find what you are looking for there).
I am a beginner to Elasticsearch.
I have an Elasticsearch cluster with 3 nodes in AWS EC2; 2 of these nodes are spot instances and one is an on-demand instance. Shards and replicas are currently distributed among all the nodes. I want the on-demand instance to act as the primary node, meaning a replica of every index across the 3 nodes should be allocated to it, so that even if the spot instances get terminated I will not lose data. Is there a way to configure this? Thanks in advance.
If I understand correctly, you want to allocate the primary shards to the on-demand instance?
Primary/replica shard-level filtering is not currently possible. However, if your concern is data loss, you don't have to worry so much: if a primary shard goes down, one of its replica shards gets promoted to primary.
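That said, one common way (not primary-specific) to guarantee that at least one copy of every shard lands on the on-demand node is shard allocation awareness on a custom node attribute. This is only a sketch: it assumes an attribute named `lifecycle` has been set in each node's elasticsearch.yml (e.g. `node.attr.lifecycle: ondemand` on the on-demand instance and `node.attr.lifecycle: spot` on the spot instances; on versions before 5.0 the setting was `node.lifecycle`):

```python
import requests

ES = "http://localhost:9200"  # assumption: any reachable node in the cluster

# With awareness enabled on "lifecycle", Elasticsearch spreads the copies
# of each shard across the attribute's values, so a primary and its
# replica will never both sit on spot instances.
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"persistent": {
        "cluster.routing.allocation.awareness.attributes": "lifecycle"
    }},
)
print(resp.json())
```

Each index still needs at least one replica for this to help; with zero replicas, losing a spot node still loses data.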
Do all shards (within an index) have the same content?
If yes, do more shards = a longer propagation (save) time?
If no, does one failed shard = incomplete data when merging?
First, you need to understand what sharding is and why it's important in distributed systems like Elasticsearch. You can read some good resources on shards here, here, and here.
Now, coming to your question:
Do all shards (within an index) have the same content?
The answer is no (assuming you are referring to primary shards here; a replica shard is, of course, just a copy of a primary shard). Let's take an example.
Suppose your index contains around 100 million docs and you have a cluster of 10 data nodes, and you want to scale your index horizontally, so you start with a setting of 10 primary shards and 1 replica. In this case, Elasticsearch will physically divide your data into 10 primary shards, and each primary shard will be placed on a different node of the cluster, since there are 10 data nodes. Similarly, every primary shard's copy, which is called its replica, will be placed on a different node than its primary.
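A sketch of creating such an index (the index name and local endpoint are assumptions):

```python
import requests

ES = "http://localhost:9200"

# 10 primaries with 1 replica each: 20 physical shards in total, which
# a 10-node cluster can spread so that no node holds both a primary
# and its own replica.
resp = requests.put(
    f"{ES}/my_index",
    json={"settings": {
        "number_of_shards": 10,
        "number_of_replicas": 1,
    }},
)
print(resp.json())
```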
Now coming to your follow-up question.
If yes, do more shards = a longer propagation (save) time? If no, does one failed shard = incomplete data when merging?
Since Elasticsearch doesn't store the same data in all the primary shards, more shards do not mean a longer propagation or save time. And when one of the shards fails, Elasticsearch recovers its data from that shard's replica, which physically lives on a different data node.
Bonus tip: shards are used to split your data and make your application horizontally scalable, while replicas make your application highly available, since they hold duplicated data; that is how the application recovers easily from the scenario you asked about in your follow-up question.
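As an aside, the number of primary shards is fixed at index creation, but the replica count can be changed at any time. A sketch, again assuming a hypothetical `my_index`:

```python
import requests

ES = "http://localhost:9200"

# Replicas can be adjusted live, e.g. raised for extra redundancy or
# read throughput; primaries cannot be changed without reindexing.
resp = requests.put(
    f"{ES}/my_index/_settings",
    json={"index": {"number_of_replicas": 2}},
)
print(resp.json())
```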
Let me know if you need any clarification or more details.
Short answer:
Q-1: no.
If no: if the index has no replica, a failed shard affects the whole index, but not the other shards of the index.
Please read this document:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/_basic_concepts.html
I have the following picture of my cluster (I am using Cerebro). It seems that all the shards are on the 3rd node.
And when data comes in, I see load > 4 on the 1st node while the other nodes are OK.
Logstash -> LB -> ES-nodes (1, 2, 3). What am I doing wrong?
Thank you in advance.
The high load on that one particular node could be down to a couple of reasons. The ones that initially spring to mind:
If it is the master node, then the large number of shards could be having an adverse effect.
You could be sending numerous large read requests to that one particular node, so it has to deal with all the aggregations, e.g. if you have Kibana connected to that node.
Some general notes:
The shards with the solid box are primary shards; the shards with the dotted box are replica shards. You currently have primaries = 8 and replicas = 2. This means there are 8 primary shards per index, and each of those has 2 replica shards. There is much more info about shards in the ES guide; it's written for an old version of ES but is still valid.
The fact that all your primary shards are on the same node is a coincidence. This often happens when one node starts up before the others: all the primary shards get allocated to it, and the replicas go onto the other nodes once they start up. If you take down your first node, you should see the primaries move to other nodes.
To the left of each node name is a star; the one with the filled-in star is the currently elected master. Given your number of shards, the master has a relatively large overhead, because it has to manage so many shards. Try setting "number_of_shards": 3, "number_of_replicas": 1. Note that those numbers only apply to newly created indexes, so recreate your indexes to see this take effect (see the sketch after these notes).
Your unicast settings are correct.
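Since those settings only apply to new indexes, and your indexes are coming from Logstash, one way to apply them is an index template so every newly created index picks them up. A sketch; the template name is an assumption, and on versions before 6.0 the pattern key is "template" rather than "index_patterns":

```python
import requests

ES = "http://localhost:9200"

# Every new index matching logstash-* will be created with
# 3 primary shards and 1 replica each.
resp = requests.put(
    f"{ES}/_template/logstash_shards",
    json={
        "index_patterns": ["logstash-*"],
        "settings": {
            "number_of_shards": 3,
            "number_of_replicas": 1,
        },
    },
)
print(resp.json())
```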
I have manually allocated 3 primary shards to a particular node in Elasticsearch. The replicas of these shards reside on different nodes. Now, let's say primary shard number 2 goes down (for example, due to an overflow of data) without the node it resides on going down. Is it then possible to retrieve the data residing on that particular shard after I manually re-allocate it to a different node? If yes, how?
Yes.
Once the node with primary shard number 2 goes down, the replica shard on the other node will be promoted to primary, allowing you to retrieve the data. See here:
Coping with failure (ES Definitive Guide)
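For the manual re-allocation itself, the cluster reroute API can move a shard between nodes. A sketch with assumed index and node names:

```python
import requests

ES = "http://localhost:9200"

# Move shard 2 of "my_index" from one node to another; Elasticsearch
# will reject moves that violate its allocation rules.
resp = requests.post(
    f"{ES}/_cluster/reroute",
    json={"commands": [{
        "move": {
            "index": "my_index",
            "shard": 2,
            "from_node": "node-1",
            "to_node": "node-2",
        }
    }]},
)
print(resp.json().get("acknowledged"))
```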
Using ES 1.5.2.
When a primary shard is unassigned, does that mean the data is lost?
I had some unassigned primaries and tried to re-allocate them with the allocate command, but the index didn't go back to its original size.
There shouldn't be any data loss: one of the master node's roles is specifically to recover the data in the soon-to-be unassigned shard and redistribute it across the other available shards.
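If you want to verify this for yourself after re-allocating, you can compare document counts; a sketch assuming an index named `my_index`:

```python
import requests

ES = "http://localhost:9200"

# Show all shards of the index and their states; any remaining
# UNASSIGNED entries will show up here.
print(requests.get(f"{ES}/_cat/shards/my_index",
                   params={"v": "true"}).text)

# Total documents currently searchable in the index; compare this with
# the count you had before the shards became unassigned.
print(requests.get(f"{ES}/my_index/_count").json()["count"])
```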