I'm testing Azure Service Fabric on-premises functionality and I'm having some trouble with a cluster installed using the default configuration files provided.
As soon as some of the nodes go offline (I shut down the hosts), the whole cluster becomes unresponsive (for example, Service Fabric Explorer becomes unavailable on all node IPs).
For example:
If I create a 3-node cluster (Bronze), the whole cluster becomes unavailable when I shut down one node.
If I create a 5-node cluster (same behavior with the Bronze and Silver models), the whole cluster becomes unavailable when I shut down three nodes.
If I create a 6-node cluster, the whole cluster becomes unavailable when I shut down three nodes.
I also tried disabling the nodes with PowerShell after shutting them down, but the result is the same.
I was thinking that as long as one node was still running, the cluster would continue to work. But it seems that the cluster becomes unavailable as soon as 50% of the nodes are off, and that the cluster needs a minimum of 3 nodes to operate.
Is this the normal behavior, or can I change the configuration? How can I change it on an on-premises installation?
Regards
The minimum number of VMs for the primary node type is determined by the durability tier you choose.
The number of nodes you can lose is determined by quorum.
Three nodes: with three nodes (N=3), the requirement to create a quorum is still two nodes (3/2 + 1 = 2). This means that you can lose an individual node and still maintain quorum.
(So your remark about the 3-node cluster doesn't match the documentation. Are you sure it really became unavailable, and not just unhealthy?)
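Applying that same formula (integer division) to the cluster sizes you tested:

3 nodes: quorum = 3 / 2 + 1 = 2 -> tolerates 1 node down
5 nodes: quorum = 5 / 2 + 1 = 3 -> tolerates 2 nodes down, so shutting down 3 loses quorum
6 nodes: quorum = 6 / 2 + 1 = 4 -> tolerates 2 nodes down, so shutting down 3 loses quorum

That matches your 5- and 6-node observations; only the 3-node case is unexpected.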
We are currently setting up an environment with two elasticsearch instances (clustered servers).
Since it's clustered, we need to make sure that data (indexes) are synched between the two instances.
We do not have the possibility to setup an additional (3rd) server/instance to act as the 'master'.
Therefore we have configured both instances as master and data nodes. So instance 1 is master & data and instance 2 is also master & data.
The synchronization works fine when both instances are up and running. But when one instance is down, the other instance keeps trying to connect to the instance that is down, which obviously fails because that instance is down. As a result, the node that is up stops functioning as well, because it cannot connect to its 'master' node (which is the node that is down), even though the instance itself is also a 'master'.
The following errors are logged in this case:
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
org.elasticsearch.transport.ConnectTransportException: [xxxxx-xxxxx-2][xx.xx.xx.xx:9300] connect_exception
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: xx.xx.xx.xx/xx.xx.xx.xx:9300
In short: two elasticsearch master instances in a clustered setup. When one is down, the other does not function because it cannot connect to the 'master' instance.
Desired result: If one of the master instances is down, the other should continue functioning (without throwing errors).
Any recommendations on how to solve this, without having to set up an additional server that is the 'master' and the other two the 'slaves'?
Thanks
With only two master-eligible nodes, both must be up to form a voting majority.
That's why you need a minimum of 3 master-eligible nodes if you want your cluster to survive the loss of one node.
You can simply add a small, dedicated master node by setting all other roles to false.
This node needs very few resources.
As described in this post:
https://discuss.elastic.co/t/master-node-resource-requirement/84609
Dedicated master nodes need persistent storage, but not a lot of it. 1-2 CPU cores and 2-4GB RAM is often sufficient for smaller deployments. As dedicated master nodes do not store data you can also set the heap to a higher percentage (75%-80%) of total RAM than is recommended for data nodes.
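For illustration, a minimal elasticsearch.yml for such a dedicated master-only node could look roughly like this (pre-7.x settings, since the other answers here refer to minimum_master_nodes; the node and host names are placeholders):

node.name: master-only-1          # hypothetical name for the small master-only node
node.master: true                 # eligible to be elected master
node.data: false                  # stores no index data
node.ingest: false                # runs no ingest pipelines
discovery.zen.ping.unicast.hosts: ["instance-1", "instance-2", "master-only-1"]
discovery.zen.minimum_master_nodes: 2   # majority of the 3 master-eligible nodes

With three master-eligible nodes and minimum_master_nodes set to 2 on each of them, any single node can go down and the remaining two can still elect a master.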
If adding one more node is not an option, then you can set minimum_master_nodes=1. This keeps your ES cluster up even if only 1 node is up. But it may lead to a split-brain issue, because a single node is then allowed to form a cluster on its own.
In that scenario you have to restart the cluster to resolve the split brain.
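For reference, that is a single line in each node's elasticsearch.yml (pre-7.x only; the setting has no effect from 7.0 onwards):

discovery.zen.minimum_master_nodes: 1   # lets a lone node elect itself master; accepts the split-brain risk described above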
I would suggest you upgrade to Elasticsearch 7.0 or above. There you can live with two nodes, each master-eligible, and the split-brain issue will not occur.
On your current version you should not have 2 master-eligible nodes in the cluster, as it's a very risky thing and can lead to a split-brain issue.
Master nodes don't require many resources, and as you have just two data nodes, you can still live without dedicated master nodes (but please be aware that this has downsides), just to save cost.
So simply remove the master role from one of the nodes and you should be good to go.
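Removing the master role from one of the two instances is likewise a small change in that instance's elasticsearch.yml (pre-7.x boolean syntax shown; on 7.x+ you would list the roles via node.roles instead):

node.master: false   # this instance keeps serving data but can never be elected master
node.data: true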
I'm trying to automate horizontal scale-up and scale-down of Elasticsearch nodes in a Kubernetes cluster.
Initially, I deployed an Elasticsearch cluster (3 master, 3 data & 3 ingest nodes) on a Kubernetes cluster, where cluster.initial_master_nodes was:
cluster.initial_master_nodes:
- master-a
- master-b
- master-c
Then, I performed a scale-down operation and reduced the number of master nodes from 3 to 1 (unexpected, but for testing purposes). While doing this, I deleted the master-c and master-b nodes and restarted the master-a node with the following setting:
cluster.initial_master_nodes:
- master-a
Since the elasticsearch nodes (i.e. pods) use persistent volumes, after restarting the node, master-a shows the following logs:
"message": "master not discovered or elected yet, an election requires at least 2 nodes with ids from [TxdOAdryQ8GAeirXQHQL-g, VmtilfRIT6KDVv1R6MHGlw, KAJclUD2SM6rt9PxCGACSA], have discovered [] which is not a quorum; discovery will continue using [] from hosts providers and [{master-a}{VmtilfRIT6KDVv1R6MHGlw}{g29haPBLRha89dZJmclkrg}{10.244.0.95}{10.244.0.95:9300}{ml.machine_memory=12447109120, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 5, last-accepted version 40 in term 5" }
Seems like it's trying to find master-b and master-c.
Questions:
How can I overwrite the cluster settings so that master-a won't search for these deleted nodes?
The cluster.initial_master_nodes setting only has an effect the first time the cluster starts up, but to avoid some very rare corner cases you should never change its value once you've set it and generally you should remove it from the config file as soon as possible. From the reference manual regarding cluster.initial_master_nodes:
You should not use this setting when restarting a cluster or adding a new node to an existing cluster.
Aside from that, Elasticsearch uses a quorum-based election protocol and says the following:
To be sure that the cluster remains available you must not stop half or more of the nodes in the voting configuration at the same time.
You have stopped two of your three master-eligible nodes at the same time, which is more than half of them, so it's expected that the cluster no longer works.
The reference manual also contains instructions for removing master-eligible nodes which you have not followed:
As long as there are at least three master-eligible nodes in the cluster, as a general rule it is best to remove nodes one-at-a-time, allowing enough time for the cluster to automatically adjust the voting configuration and adapt the fault tolerance level to the new set of nodes.
If there are only two master-eligible nodes remaining then neither node can be safely removed since both are required to reliably make progress. To remove one of these nodes you must first inform Elasticsearch that it should not be part of the voting configuration, and that the voting power should instead be given to the other node.
It goes on to describe how to safely remove the unwanted nodes from the voting configuration using POST /_cluster/voting_config_exclusions/node_name when scaling down to a single node.
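As a sketch of that procedure for the node names in the question (API form as shown in the manual excerpt above; newer 7.x releases take a ?node_names= query parameter instead of a path segment):

# With three master-eligible nodes, stop master-c first and let the voting
# configuration shrink automatically. Then, before stopping master-b, move its
# voting power to master-a:
POST /_cluster/voting_config_exclusions/master-b

# After master-b has been shut down and the scale-down is finished, clear the list:
DELETE /_cluster/voting_config_exclusions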
The cluster state, which also stores the master configuration, is kept in the data folder of the Elasticsearch node. In your case, it seems master-a is reading the old cluster state (which has 3 master nodes, with their ids).
You could delete the data folder of your master-a, so that it starts from a clean cluster state; that should resolve your issue.
Also make sure the other data and ingest nodes have node.master: false, as by default it's true.
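As a sketch, the data and ingest pods would then carry role settings along these lines (boolean role settings as referred to in the answer; 7.9+ also offers node.roles):

# data nodes
node.master: false
node.data: true
node.ingest: false

# ingest nodes
node.master: false
node.data: false
node.ingest: true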
I currently have a single Elasticsearch node on a Windows server. Can you please explain how to add one extra node on a different machine for failover? I also wonder how the two nodes can be kept identical using NEST.
Usually, you don't run a failover node, but rather run a cluster of nodes to provide high availability.
A minimum topology of 3 master-eligible nodes with minimum_master_nodes set to 2, and a sharding strategy that distributes primary and replica shards over the nodes to provide data redundancy, is the minimum viable topology I'd consider running in production.
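As a rough sketch of what that topology means in configuration terms (pre-7.x setting, matching the answer; the index name is a placeholder):

# elasticsearch.yml on each of the three nodes
discovery.zen.minimum_master_nodes: 2   # majority of 3 master-eligible nodes

# create an index whose primaries and replicas are spread across the nodes
PUT /my-index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}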
I am learning some basic concepts of cluster computing and I have some questions to ask.
According to this article:
If a cluster splits into two (or more) groups of nodes that can no longer communicate with each other (aka.partitions), quorum is used to prevent resources from starting on more nodes than desired, which would risk data corruption.
A cluster has quorum when more than half of all known nodes are online in the same partition, or for the mathematically inclined, whenever the following equation is true:
total_nodes < 2 * active_nodes
For example, if a 5-node cluster split into 3- and 2-node partitions, the 3-node partition would have quorum and could continue serving resources. If a 6-node cluster split into two 3-node partitions, neither partition would have quorum; pacemaker's default behavior in such cases is to stop all resources, in order to prevent data corruption.
Two-node clusters are a special case.
By the above definition, a two-node cluster would only have quorum when both nodes are running. This would make the creation of a two-node cluster pointless
Questions:
From the above I am a bit confused: why can we not just stop all cluster resources, as in the 6-node cluster example? What is special about the two-node cluster?
You are correct that a two-node cluster can only have quorum when the nodes are in communication. Thus, if the cluster were to split, using the default behavior, the resources would stop.
The solution is to not use the default behavior. Simply set Pacemaker to no-quorum-policy=ignore. This will instruct Pacemaker to continue to run resources even when quorum is lost.
...But wait, what happens now if the cluster communication is broken but both nodes are still operational? Will they not consider their peers dead and both become active nodes? Now I have two primaries, and potentially diverging data, or conflicts on my network, right? This issue is addressed via STONITH. Properly configured STONITH will ensure that only one node is ever active at a given time, and essentially prevents split-brain from occurring at all.
An excellent article further explaining STONITH and its importance was written by LMB back in 2010: http://advogato.org/person/lmb/diary/105.html
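For reference, with the pcs shell those two cluster properties are set roughly like this (assuming a pcs-managed Pacemaker cluster; configuring an actual fence/STONITH device is a separate prerequisite):

pcs property set no-quorum-policy=ignore   # keep running resources even when quorum is lost
pcs property set stonith-enabled=true      # fence the peer so only one node can stay active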
I need to provide many Elasticsearch instances for different clients, but hosted in my infrastructure.
For the moment these are only some small instances.
I am wondering whether it would not be better to build one big Elasticsearch cluster with 3-5 servers to handle all instances, where each client gets a different index in this cluster and each client's data is distributed over the servers.
Or maybe another idea?
And another question is about quorum: what is the quorum for ES, please?
thanks,
You don't have to assign each client to a different index; the Elasticsearch cluster will automatically share the load among all nodes that hold the shards.
If you are not sure how many nodes are needed, start with a small cluster and keep monitoring the health status of the cluster. Add more nodes to the cluster if server load is high; remove nodes if server load is low.
When the cluster keeps growing, you may need to assign a dedicated role to each node. This way, you will have more control over the cluster, and it is easier to diagnose problems and plan resources. For example, add more master nodes to stabilize the cluster, more data nodes to increase search and indexing performance, and more coordinating nodes to handle client requests.
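As a concrete way to do that monitoring, the cluster health API reports the overall status and node count:

GET /_cluster/health
# watch "status" (green/yellow/red), "number_of_nodes" and "active_shards_percent_as_number"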
A quorum is defined as a majority of the master-eligible nodes in the cluster, as follows:
(master_eligible_nodes / 2) + 1
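Worked out for a couple of common sizes (integer division):

3 master-eligible nodes: 3 / 2 + 1 = 2  -> survives the loss of 1 master-eligible node
5 master-eligible nodes: 5 / 2 + 1 = 3  -> survives the loss of 2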