I have an Elasticsearch (v5.6.10) cluster with 3 nodes.
Node A : Master
Node B : Master + Data
Node C : Master + Data
There are 6 shards per data node, with replication set to 1. All 6 primary shards are on Node B and all 6 replicas are on Node C.
My requirement is to take Node B out, do some maintenance work, and put it back into the cluster without any downtime.
I checked the Elastic documentation, discussion forums and Stack Overflow questions. I found that I should first execute the request below in order to move the shards on that node to the remaining nodes.
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "<Node B IP>"
  }
}';echo
Once all the shards have been reallocated I can shut down the node and do my maintenance work. Once I am done, I have to include the node again for allocation and Elasticsearch will rebalance the shards again.
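I believe clearing the exclusion afterwards would look something like this (setting it back to null, same endpoint as above); please correct me if this is wrong:

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : null
  }
}';echo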
Now I also found another discussion where a user ended up with yellow/red cluster health because they had only one data node but had set replication to one, which caused unassigned shards. It seems to me that by doing this exercise I am taking my cluster towards that state.
So my concern is whether I am following the correct approach, keeping in mind that all my primary shards are on the node (Node B) I am taking out of a cluster that has a replication factor of 1.
With only two data nodes, and wanting to shut down one of them, you can't really reallocate the shards. Elasticsearch never allocates primary and replica shards on the same node; it wouldn't add any benefit in availability or performance and would only double the disk space. So your reallocation command wouldn't add any benefit here, since the shards cannot be moved anywhere.
Do a synced flush and then an orderly shutdown of the node. The replica shards on the remaining node will automatically be promoted to primary shards. Your cluster will go yellow until the other node joins again, but there isn't really a way around that in your scenario (without resorting to either a hack or overkill). This is fine: as long as you always have a replica, it will be on the other node and your cluster will keep working as expected.
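For example, the synced flush is a single POST (assuming the same localhost:9200 endpoint as in your question; the _flush/synced API exists in 5.x):

curl -XPOST localhost:9200/_flush/synced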
Related
I have an Elasticsearch 7.16.2 cluster running with three nodes (2 master, 1 voting-only node). An index has two primary shards and two replicas, and on restarting a node, both primary shards move to a single node. How can I restrict each node to holding one primary shard and one replica of the index?
You can use the index-level shard allocation settings to achieve that, but it is not that straightforward: it is a fairly complex setting and can cause further imbalance when the nodes and indices in the cluster change over time.
In order to avoid the issue that happens on a node restart, you should disable shard allocation and shard rebalancing before restarting the nodes in your Elasticsearch cluster.
Command to disable allocation
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
Command to disable rebalance
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "none"
  }
}
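Once the nodes are back up and have joined the cluster, you can set both settings back to all (their default) to re-enable allocation and rebalancing, for example via curl (assuming a node reachable on localhost:9200):

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "cluster.routing.allocation.enable" : "all",
    "cluster.routing.rebalance.enable" : "all"
  }
}'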
Apart from that, you can use the reroute API to manually move the shards to a node in Elasticsearch to fix your current shard allocation.
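As a rough sketch (the index name, shard number and node names below are placeholders, not values from your cluster), a manual move via the reroute API looks roughly like this:

curl -XPOST localhost:9200/_cluster/reroute -H 'Content-Type: application/json' -d '{
  "commands" : [
    {
      "move" : {
        "index" : "my-index", "shard" : 0,
        "from_node" : "node-1", "to_node" : "node-2"
      }
    }
  ]
}'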
The setting is index.routing.allocation.total_shards_per_node, but you have a problem. First of all, I assume you have three data nodes (if you don't, increase the number of data nodes).
The problem is that you have 4 primary and replica shards in total, so one node must hold two of them. That means you cannot set index.routing.allocation.total_shards_per_node to 1; it must be at least 2, and your problem is still not solved.
The setting is dynamic: https://www.elastic.co/guide/en/elasticsearch/reference/master/increase-shard-limit.html
You can also set the cluster.routing.allocation.total_shards_per_node setting at the cluster level, as sketched below.
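For illustration (the index name and the value of 2 are placeholders; as noted above, 1 will not work with 4 shard copies on 3 data nodes), the index-level and cluster-level variants can be set like this:

curl -XPUT localhost:9200/my-index/_settings -H 'Content-Type: application/json' -d '{
  "index.routing.allocation.total_shards_per_node" : 2
}'

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "persistent" : {
    "cluster.routing.allocation.total_shards_per_node" : 2
  }
}'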
I have configured a two-node cluster and created an index with 4 shards and 1 replica.
Elasticsearch created 2 primary shards on each node; this is how it looks from the head plugin: shards 1 and 3 are primary on node 1 (stephen1c7), and shards 0 and 2 are primary on node 2 (stephen2c7).
Shut down one node
Now I have shut down node 2 (stephen2c7) to see if all the shards on node 1 (stephen1c7) become primary. Yes, all shards are now primary.
Bring the shut-down node back up
Now I have brought node 2 (stephen2c7) up again to see if any shards on this node will become primary. But surprisingly, no shard on this node became primary. I waited a long time, but still no shard on node 2 is primary.
Why so?
Is there any configuration to set for making the shards primary again after a node is up?
Thanks in advance!
Given this post and this one (albeit slightly old), balancing primary and replica shards across the cluster does not seem to be a priority for Elasticsearch. As you can see, Elasticsearch sees a primary and a replica for each shard, and thus considers the status satisfactory for the cluster.
What I would suggest is to have a look at the shard balancing heuristics and play with those values until you obtain a satisfactory result (as is often the case with Elasticsearch, testing several parameters is what will yield the best configuration for your architectural design choices).
Note that if you start using the shard balancing heuristics, you might not get good results if you also use shard filtering or forced awareness at the same time.
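A minimal sketch of what adjusting those heuristics can look like (the values are only illustrative, not recommendations; the defaults are roughly shard=0.45, index=0.55, threshold=1.0):

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.balance.shard" : 0.50,
    "cluster.routing.allocation.balance.index" : 0.50,
    "cluster.routing.allocation.balance.threshold" : 1.0
  }
}'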
I have an Elasticsearch cluster with 11 nodes. Five of these are data nodes and the other ones are client nodes from where I add and retrieve documents.
I am using the standard Elasticsearch configuration. Each index has 5 shards plus replicas. In the cluster I have 55 indices and roughly 150 GB of data.
The cluster is very slow. With the Kopf plugin I can see the stats of each node, and there I can see that one single data node (not the master) is permanently overloaded. Heap, disk and CPU are OK, but its load is almost always at 100%. I have noticed that on this node every shard is a primary shard, whereas all other data nodes hold both primary shards and replicas. When I shut that node down and bring it up again, the same problem occurs on another data node.
I don't know why this happens or how to solve it. I thought that the client nodes and the master node distribute the requests evenly? Why is one data node always overloaded?
Try the following settings:
cluster.routing.rebalance.enable:
Enable or disable rebalancing for specific kinds of shards:
all - (default) Allows shard balancing for all kinds of shards.
primaries - Allows shard balancing only for primary shards.
replicas - Allows shard balancing only for replica shards.
none - No shard balancing of any kind is allowed for any indices.
cluster.routing.allocation.allow_rebalance:
Specify when shard rebalancing is allowed:
always - Always allow rebalancing.
indices_primaries_active - Only when all primaries in the cluster are allocated.
indices_all_active - (default) Only when all shards (primaries and replicas) in the cluster are allocated.
cluster.routing.allocation.cluster_concurrent_rebalance:
Controls how many concurrent shard rebalances are allowed cluster-wide.
Defaults to 2.
Sample curl to apply desired settings:
curl -XPUT <elasticsearchserver>:9200/_cluster/settings -d '{
  "transient" : {
    "cluster.routing.rebalance.enable" : "all"
  }
}'
You can replace transient with persistent if you want your settings to persist across restarts.
I am reading "Elasticsearch: The Definitive Guide" and I would like to confirm something.
When we create an index, it will be given 5 shards by default (or we can change this with the "number_of_shards" setting).
But if I am using just one node (one server), will the index be spread across 5 shards on the same node? I guess what I am asking is: can a node have multiple shards?
Yes, a node can have multiple shards of one or more indices. You can verify it for yourself by executing the GET _cat/shards?v command. Read more about the command here. The problem with having a single-node Elasticsearch cluster is that replica shards for indices will not be allocated (but primary shards will be), as it does not make sense to have both the primary and the replica of the same shard on the same machine.
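For example (a sketch assuming a node on localhost:9200 and an index name of your own), you can list the shards and, on a single-node cluster, drop the replicas so the index can go green:

curl -XGET 'localhost:9200/_cat/shards?v'

curl -XPUT localhost:9200/my-index/_settings -H 'Content-Type: application/json' -d '{
  "index" : { "number_of_replicas" : 0 }
}'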
I have set up a cluster with two nodes but I have some confusion about shards and replicas.
What I intend is a setup where there is a master (node A) handling writes and a slave (node B) that helps with read and search operations. Ideally, if the master is not functional I can recover the data from the slave.
I read that the default is 5 shards and 1 replica. Does that mean my primary data would then be automatically split between node A and node B? Would that mean that if one node is down I would lose half the data?
Given the description of my need above, am I doing it right?
The only config I have changed at this point is the following
cluster:
  name: maincluster
node:
  name: masternode
  master: true
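I am assuming the matching config on node B would look something like this (the node name is just a placeholder, and I am not sure whether these flags are right):

cluster:
  name: maincluster
node:
  name: datanode
  master: false
  data: true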
I am really new to Elasticsearch, so please kindly point out if I am missing anything.
5 shards and 1 replica means that your data will be split into 5 shards per index.
Each shard will have one replica (5 more backup shards) for a total of 10 shards spread across your set of nodes.
The replica shard will be placed onto a different node than the primary shard (so that if one node fails you have redundancy).
With 2 nodes and replication set to 1 or more, losing a node will still give you access to all of your data, since the primary shard and the replica shard will never be on the same node.
I would install the Elasticsearch head plugin; it provides a very graphical view of nodes and shards (primary and replica).
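For example (the index name is a placeholder), creating an index with explicit shard and replica counts and then checking cluster health would look roughly like this:

curl -XPUT localhost:9200/my-index -H 'Content-Type: application/json' -d '{
  "settings" : {
    "number_of_shards" : 5,
    "number_of_replicas" : 1
  }
}'

curl -XGET 'localhost:9200/_cluster/health?pretty'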