I have an Elasticsearch cluster with 11 nodes. Five of these are data nodes and the rest are client nodes through which I add and retrieve documents.
I am using the standard Elasticsearch configuration. Each index has 5 shards plus replicas. The cluster holds 55 indices and roughly 150 GB of data.
The cluster is very slow. With the Kopf plugin I can see the stats of each node. There I can see that one single data node (not the master) is permanently overloaded. Heap, disk and CPU are OK, but its load is almost always at 100%. I have noticed that every shard on it is a primary shard, whereas all other data nodes hold both primary shards and replicas. When I shut that node down and bring it back up, the same problem occurs on another data node.
I don't know why this happens or how to solve it. I thought the client nodes and the master node distribute requests evenly? Why is one data node always overloaded?
Try the following settings:
cluster.routing.rebalance.enable:
Enable or disable rebalancing for specific kinds of shards:
all - (default) Allows shard balancing for all kinds of shards.
primaries - Allows shard balancing only for primary shards.
replicas - Allows shard balancing only for replica shards.
none - No shard balancing of any kind is allowed for any indices.
cluster.routing.allocation.allow_rebalance:
Specify when shard rebalancing is allowed:
always - Always allow rebalancing.
indices_primaries_active - Only when all primaries in the cluster are allocated.
indices_all_active - (default) Only when all shards (primaries and replicas) in the cluster are allocated.
cluster.routing.allocation.cluster_concurrent_rebalance:
Allows control over how many concurrent shard rebalances are allowed cluster-wide.
Defaults to 2
Sample curl to apply desired settings:
curl -XPUT <elasticsearchserver>:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.rebalance.enable" : "all"
  }
}'
You can replace transient with persistent if you want your settings to persist across restarts.
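If you also want to tune the other two settings described above, here is a sketch that applies them in the same way (the <elasticsearchserver> placeholder and the values shown are just examples; adjust them to your cluster):
curl -XPUT <elasticsearchserver>:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.allow_rebalance" : "indices_all_active",
    "cluster.routing.allocation.cluster_concurrent_rebalance" : 2
  }
}'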
Related
I have an Elasticsearch 7.16.2 cluster running with three nodes (2 master nodes, 1 voting-only node). An index has two primary shards and two replicas, and on restarting a node both primary shards move to a single node. How can I restrict the index so that each node holds one primary shard and one replica?
You can use the index-level shard allocation settings to achieve that. It is not entirely straightforward, though: it is a fairly complex set of settings and can cause further imbalance when the nodes and indices in the cluster change. A sketch is shown below.
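One of the relevant index-level settings is index.routing.allocation.total_shards_per_node. A minimal sketch (my-index is a placeholder name; since the index has 4 shards in total, at least one node must hold 2 of them, so the limit cannot go below 2):
PUT /my-index/_settings
{
  "index.routing.allocation.total_shards_per_node": 2
}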
In order to avoid the issue that happens on a node restart, you should disable shard allocation and shard rebalancing before restarting the nodes in your Elasticsearch cluster.
Command to disable allocation
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
Command to disable rebalance
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "none"
  }
}
Once the node has rejoined the cluster, set both settings back to "all".
Apart from that, you can use the reroute API to manually move shards to a node in Elasticsearch and fix your current shard allocation.
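A minimal sketch of the reroute API (my-index, node-1 and node-2 are placeholder names), moving shard 0 from one node to another:
POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-index",
        "shard": 0,
        "from_node": "node-1",
        "to_node": "node-2"
      }
    }
  ]
}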
The setting you want is index.routing.allocation.total_shards_per_node, but there is a catch. First of all, I assume you have three data nodes (if you don't, add more data nodes).
The problem is that you have 4 shards in total (primaries plus replicas), so one node has to hold two of them. That means you cannot set index.routing.allocation.total_shards_per_node to 1; it must be at least 2, and then your problem is not fully solved.
The setting is dynamic: https://www.elastic.co/guide/en/elasticsearch/reference/master/increase-shard-limit.html
You can also set cluster.routing.allocation.total_shards_per_node for the whole cluster.
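A sketch of the cluster-wide variant (the value 10 is just an example; the default of -1 means unbounded):
PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.total_shards_per_node": 10
  }
}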
I have an Elasticsearch (v5.6.10) cluster with 3 nodes.
Node A : Master
Node B : Master + Data
Node C : Master + Data
There are 6 primary shards with replication set to 1, so each data node holds 6 shards. All 6 primary shards are on Node B and all 6 replicas are on Node C.
My requirement is to take Node B out, do some maintenance work and put it back into the cluster without any downtime.
I checked the Elastic documentation, the discussion forum and Stack Overflow questions. I found that I should first execute the request below in order to move the shards on that node to the remaining nodes.
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : "<Node B IP>"
  }
}';echo
Once all the shards have been relocated I can shut down the node and do my maintenance work. When I am done, I have to include the node in allocation again (for example by clearing the exclusion, as in the sketch below) and Elasticsearch will rebalance the shards.
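A sketch of clearing the exclusion afterwards (setting the value to null removes the transient setting):
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : {
    "cluster.routing.allocation.exclude._ip" : null
  }
}';echo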
I also found another discussion where a user ran into yellow/red cluster health because they had only one data node but had set replication to one, causing unassigned shards. It seems to me that by doing this exercise I am pushing my cluster towards that state.
So my concern is whether I am going about this the right way, keeping in mind that all my primary shards are on the node (Node B) that I am taking out of a cluster with a replication factor of 1.
With only two data nodes, one of which you want to shut down, you can't really relocate the shards. Elasticsearch never allocates a primary shard and its replica on the same node; it wouldn't add any benefit in availability or performance and would only double the disk usage. So your reallocation command wouldn't help here, since the shards have nowhere to go.
Do a synced flush and then an orderly shutdown of the node. The replica shards on the remaining node will automatically be promoted to primary shards. Your cluster will go yellow until the other node joins again, but there isn't really a way around that in your scenario (without resorting to either a hack or overkill). This is fine: as long as you always have a replica it will be on the other node and your cluster will keep working as expected.
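A sketch of that sequence with curl (on 5.6 the synced flush endpoint is _flush/synced; disabling allocation during the restart is optional but avoids unnecessary shard shuffling):
# optionally stop shard allocation while the node is down
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : { "cluster.routing.allocation.enable" : "none" }
}'
# synced flush, then stop the Elasticsearch service on Node B
curl -XPOST localhost:9200/_flush/synced
# ...do the maintenance, start Node B again, then re-enable allocation
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient" : { "cluster.routing.allocation.enable" : "all" }
}'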
I have read that when a new indexing request is sent to an ES cluster, ES determines which shard the document should be stored in based on routing. The node that hosts that primary shard then forwards the indexing request to every node containing a replica of that shard, and it responds to the client that the document has been indexed successfully once the primary shard and its replicas have stored/indexed the document.
Does that mean that ES supports high availability (node fault tolerance) for read requests but not for write requests, or is that just the default behaviour and can it be changed?
The main purpose of replicas is failover: if the node holding a primary shard dies, a replica is promoted to primary. Replica shards can also serve read requests, thus improving search performance.
For write requests, though, indexing will be affected if one of the nodes holding a primary shard of a live index suddenly runs out of disk space: once a node's disk usage hits the configured watermark levels, ES throws a cluster block exception that prevents any writes to that node. If ALL nodes are down or unreachable, indexing will stop; however, if only one or a few nodes go down, indexing shouldn't stop completely, because replica shards on other nodes are promoted to primary when the node holding the original primary is offline. It goes without saying that some analysis and effort should go into right-sizing an ES cluster and having monitoring in place to prevent such issues.
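To keep an eye on this, a quick sketch of checking per-node disk usage and the effective cluster settings (both endpoints are standard; adjust host and port as needed):
# per-node disk usage and shard counts
curl -XGET 'localhost:9200/_cat/allocation?v'
# effective cluster settings, including the disk watermark defaults
curl -XGET 'localhost:9200/_cluster/settings?include_defaults=true'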
I have the following picture of the cluster in Cerebro. It seems that all the shards are on the 3rd node.
And when data comes in, I see a load > 4 on the 1st node while the other nodes are OK.
The pipeline is Logstash -> LB -> ES nodes (1, 2, 3). What am I doing wrong?
Thank you in advance.
The high load on that one particular node could have a couple of causes. The ones that initially spring to mind:
If it is the master node, the large number of shards could be having an adverse effect.
You could be sending numerous large read requests to that one particular node so it has to deal with all the aggregations. E.g. if you have Kibana connected to that node.
Some general notes:
The shards with the solid box are the primary shards. The shards with the dotted box are replica shards. You currently have primaries = 8 and replicas = 2. This means there are 8 primary shards per index, and each of those has 2 replica shards. There is much more info about shards in the ES guide. It's for an old version of ES but is still valid.
The fact that all your primary shards are on the same node is a coincidence. This will often happen if you have one node start up before the others. All the primary shards will be allocated to it, then the replicas will go onto other nodes once they start up. If you take down your first node you should see the primaries move to other nodes.
To the left of the node name will be a star. The one with the filled-in star is the currently elected master. Due to your number of shards, the master has a relatively large overhead because it has to manage so many shards. Try setting "number_of_shards": 3, "number_of_replicas": 1 (see the sketch after these notes). Note that those numbers are only applied to new indexes, so recreate your indexes to see this take effect.
Your unicast settings are correct.
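A sketch of creating a new index with those settings (my-new-index is just a placeholder name):
curl -XPUT 'localhost:9200/my-new-index' -H 'Content-Type: application/json' -d '{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'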
I have set up a cluster with two nodes, but I have some confusion about shards and replicas.
What I intend is a setup where a master (node A) handles writes and a slave (node B) helps with read and search operations. Ideally, if the master is not functional, I can recover the data from the slave.
I read that the default is 5 shards and 1 replica. Does that mean my primary data would be automatically split between node A and node B? Would that mean that if one node is down I would lose half the data?
Given the description of my need above, am I doing it right?
The only config I have changed at this point is the following
cluster:
  name: maincluster
node:
  name: masternode
  master: true
I am really new to Elasticsearch, so please kindly point out if I am missing anything.
5 shards and 1 replica means that your data will be split into 5 shards per index.
Each shard will have one replica (5 more backup shards) for a total of 10 shards spread across your set of nodes.
The replica shard will be placed onto a different node than the primary shard (so that if one node fails you have redundancy).
With 2 nodes and replication set to 1 or more, losing a node will still give you access to all of your data, since a primary shard and its replica will never be on the same node.
I would install the elasticsearch-head plugin; it provides a very graphical view of nodes and shards (primary and replica).
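Alternatively, a quick sketch using the built-in _cat API to see where primaries (p) and replicas (r) are placed, without installing a plugin:
curl -XGET 'localhost:9200/_cat/shards?v'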