A question. Is there any recommendation for how many nodes you need before using dedicated master nodes in an Elasticsearch cluster?
My setup:
4 nodes for non-critical data (32 GB RAM each); 3 of these can be master-eligible.
3 nodes for critical data (16 GB RAM each).
Do the master nodes need the same memory as the data nodes?
At any given time you can have only one active master node, but for availability you should have more than one master-eligible node, configured by setting node.master.
The master node is the only node in a cluster that can make changes to the cluster state. This means that if your master node is rebooted or down, you will not be able to make any changes to your cluster.
Well, at some point it is hard to say what is right or best practice, because it always depends on many parameters.
With your setup I would rather go with 3 nodes and up to 64 GB of memory each; otherwise you lose some performance to communication between your 7 servers while they are not utilizing 100% of their resources. All 3 nodes must then be able to become master, and you should set
discovery.zen.minimum_master_nodes: 2
This parameter is important to avoid split brain when each node could become a master.
For your critical data you must use at least 1 replica to prevent loss of data.
Another option would be to run dedicated master-only nodes and data-only nodes.
In any case you should always have 3 master-eligible nodes; this allows you to upgrade without downtime and ensures an always-on setup.
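As a rough sketch, each of the three master-eligible data nodes could then carry a pre-7.x elasticsearch.yml along these lines (the hostnames are placeholders, not from the original setup):
node.master: true
node.data: true
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["es-node-1", "es-node-2", "es-node-3"]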
Related
I read the documentation, but unfortunately I still don't understand one thing. While creating an AWS Elasticsearch domain, I need to choose the "Number of nodes" in the "Data nodes" section.
If I specify 3 data nodes and 3-AZ, what does it actually mean?
I have two guesses:
I'll get 3 nodes with their own storage (EBS). One node is the master and the other 2 are replicas in different AZs, just copies of the master, so that data isn't lost if the master node breaks.
I'll get 3 nodes with their own storage (EBS). All of them work independently and hold different data on their storage, so data can be processed by different nodes at the same time and stored on different volumes.
It looks like the other AZs should hold replicas, but then I don't understand why I have different amounts of free space on different nodes.
Please, explain how it works.
Many thanks for any info or links.
I haven't used AWS Elasticsearch, but I've used the Cloud Elasticsearch service.
Using 3 AZs (availability zones) means that your cluster will be spread across 3 zones in order to make it resilient. If one zone has problems, then the nodes in that zone will have problems as well.
As the description section mentions, you need to specify a multiple of 3 nodes if you choose 3 AZs. If you have 3 nodes, then every AZ will have one node. If one zone has problems, that node is out and the two remaining nodes have to pick up from there.
Now, in order to answer your question about what you get with these configurations, you can check for yourself. Run this via Kibana or any HTTP client:
GET _nodes
Check for the sections:
nodes.roles
nodes.attributes
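For example, each entry under nodes in the response carries a roles array, roughly like this (abridged and illustrative; the node id and name here are made up):
{
  "nodes": {
    "aBcD1234...": {
      "name": "node-1",
      "roles": ["master", "data", "ingest"]
    }
  }
}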
In the various documentation, blog posts, etc. you will see that for production usage, 3 nodes and 3 AZs are a good starting point for a resilient production cluster.
So let's take it step by step:
You need an odd number of master-eligible nodes in order to avoid the split-brain problem.
You need more than one node in your cluster in order to make it resilient (if the node is unavailable).
By combining these two you have the minimum requirement of 3 nodes (no mention of zones yet).
But one master and two data nodes will not cut it. You need to have 3 master-eligible nodes, so that if one node is out, the other two can still form a quorum and elect a new master, and your cluster stays operational with two nodes. For this to work, though, you need to set up your primary and replica shards in such a way that any two of your nodes hold your entire data.
Examples (for simplicity we have only one index):
1 primary, 2 replicas. Every node holds one shard which is 100% of the data
3 primaries, 1 replica. Every node will hold one primary and one replica (33% primary, 33% replica). Two nodes combined (which is the minimum to form a quorum as well) will hold all your data (and some more)
You can have more combinations but you get the idea.
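For instance, the second layout (3 primaries, 1 replica) could be created like this; the index name my-index is a placeholder:
PUT /my-index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}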
As you can see, the shard configuration needs to go along with your number and type of nodes (master-eligible, data only etc).
Now, if you add the availability zones, you take care of the problem of one zone being problematic. If your cluster as a whole sat in one zone (3 nodes in one zone), then if that zone had problems your whole cluster would be out.
If you set up one master node and two data nodes (which are not master-eligible), having 3 AZs (or even 3 nodes) doesn't do much for resiliency: if the master goes out, your cluster cannot elect a new one and is out until a master node is up again. For the same setup, if a data node goes out and your shards are configured with redundancy (meaning the two remaining nodes hold all the data between them), then it will keep working fine.
Your questions should be covered by the following three points.
If I specify 3 data nodes and 3-AZ, what does it actually mean?
This means that your data and replicas will be available in 3 AZs, with no replica in the same AZ as its primary. Check this link. For example, when you have 2 data nodes in 2 AZs: DN1 will be stored in (let's say) AZ1 and its replica in AZ2, while DN2 will be in AZ2 and its replica in AZ1.
It looks like the other AZs should hold replicas, but then I don't understand why I have different amounts of free space on different nodes.
It is because when you give your AWS Elasticsearch cluster some amount of storage, it divides the specified storage space among all data nodes. If you specify 100 GB of storage on a cluster with 2 data nodes, it splits the space equally across them, i.e. two data nodes with 50 GB of available storage each.
Sometimes you will see more nodes than you specified on the cluster. It took me a while to understand this behaviour. The reason is that when you update these configs on AWS ES, it takes some time to stabilize the cluster. So if you see more data or master nodes than expected, hold on for a while and wait for it to stabilize.
Thanks everyone for the help. To see how much space is available/allocated, run the following queries:
GET /_cat/allocation?v
GET /_cat/indices?v
GET /_cat/shards?v
So, if I create 3 nodes, then I create 3 different nodes with separate storage; they are not replicas. Some data is stored on one node, some data on another.
I know that it is possible to define more than one master-eligible node for an Elasticsearch cluster, where only one acts as master and the others can step in if necessary. See also https://stackoverflow.com/a/15022820/2648551 .
What I don't understand is how I can determine which master is active and which could step in if necessary.
I currently have the following setup:
node-01: master (x) data (-)
node-02: master (-) data (x)
node-03: master (-) data (x)
node-04: master (-) data (x)
node-05: master (-) data (x)
node-06: master (-) data (x)
Now I want to make node-02, for example, additionally master-eligible. Can I rely on ES being smart enough to always pick the non-data node (node-01) as the active master, or could node-02 ever act as the active master while all nodes are present and there are no problems? Or is that something I just don't have to worry about?
I am currently using ElasticSearch 1.7 [sic!], but I am also interested in answers based on the latest versions.
A few years later, and just for context: we "can" now decide which node becomes master, although it is not straightforward.
Elasticsearch now has an API called voting_config_exclusions which can be used to move away from the current master node, e.g.:
let's say you have 3 master-eligible nodes in your cluster
$ GET _cat/nodes?v
ip node.role master name
192.168.0.10 cdfhilmrstw - node-10
192.168.0.20 cdfhilmrstw * node-20
192.168.0.30 cdfhilmrstw - node-30
192.168.0.99 il - node-99
and Elasticsearch has selected node-20 as the active master, you can run the following call to remove the active node from voting:
POST /_cluster/voting_config_exclusions?node_names=node-20
This will cause another master-eligible node to be elected as master, effectively at random (if you have more than one left). Keep repeating this for whichever node is active until the right one ends up as master.
Note: this doesn't remove the node; it only turns it into a non-voting node that cannot act as the active master, allowing another node to take over.
Once done, make sure to run the command below to remove the exclusions and allow all eligible nodes to become master if and when the selected node goes down.
DELETE /_cluster/voting_config_exclusions
Thank You
In short, no, you can't decide which of the master-eligible nodes will become the master, because the master node is elected (it was in ES 1.7, and it still is in ES 6.2).
No, you can't rely on Elasticsearch being smart enough to always take the non-data node as the active master. In fact, as of now (6.2) they advise having dedicated master nodes (i.e. nodes that do not perform any data operations):
To ensure that your master node is stable and not under pressure, it is a good idea in a bigger cluster to split the roles between dedicated master-eligible nodes and dedicated data nodes. ... It is important for the stability of the cluster that master-eligible nodes do as little work as possible.
(Note that they are talking about a "bigger cluster".)
I can only assume that this also holds for the earlier versions and that the documentation just got richer.
There is a problem with the configuration that you have posted: although you have many nodes, the loss of one (the master node, node-01) will make your cluster non-functional. To avoid this situation you may choose one of these options:
use default strategy and make all nodes data nodes and master nodes;
make a set of dedicated master-only nodes (at least 3 of them).
It would be nice to know why the ES defaults are not good enough for you, because usually they are.
However, if this is a case where you do need dedicated master nodes, make sure you have at least 3 of them and that discovery.zen.minimum_master_nodes is set so as to avoid the "split brain" situation:
discovery.zen.minimum_master_nodes = (master_eligible_nodes / 2) + 1
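For example, if you make node-01, node-02 and node-03 master-eligible, the quorum is (3 / 2) + 1 = 2 (integer division), so each of them would get:
discovery.zen.minimum_master_nodes: 2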
Hope that helps!
In my production environment, I have a two-node cluster (ES 2.2.0) and each node sits on a different physical box. Inside elasticsearch.yml, I have the following:
discovery.zen.minimum_master_nodes: 2
My question is: if one box is down, can the other node continue to function normally and provide uninterrupted search services (index and search, write and read)?
If you have two nodes, each master-eligible, and you set discovery.zen.minimum_master_nodes: 1, then if the network goes down and the two nodes don't see each other for a while, you'll get into a split brain situation, because each node will elect itself as master.
However, with a setting of 2, you have two possible situations:
if the non-master goes down, the other node will continue to function properly (since it is already master)
if the master goes down, the other won't be able to elect itself as the master (since it will wait for a second master-eligible node to be visible).
For this reason, with only two nodes, you need to choose between the possibility of a split brain (with minimum_master_nodes: 1) and a potentially RED cluster (with minimum_master_nodes: 2). The best way to overcome this is to add a third master-only node; then minimum_master_nodes: 2 makes sense.
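A minimal sketch of such a dedicated master-only node's elasticsearch.yml (pre-7.x settings), assuming the two existing nodes keep node.data: true:
node.master: true
node.data: false
discovery.zen.minimum_master_nodes: 2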
Just try it out:
Start your cluster, bring down the master node, what happens?
Start your cluster, bring down the non-master node, what happens?
The purpose of minimum master nodes is to maintain the stability of the cluster.
If you have only 2 nodes in the cluster and set minimum master nodes to 2, the cluster expects both nodes to be up in order to serve search services.
If one node goes down in your 2-node cluster (with a minimum master nodes setting of 2), the cluster effectively goes down.
First, this setting helps prevent split brain, the existence of two masters in a single cluster.
If you have two nodes, a setting of 1 will allow your cluster to function, but doesn't protect against split brain. It is best to have a minimum of three nodes in situations like this.
3 node cluster of ElasticSearch 1.7.2 on CentOS
From a traditional cluster perspective, for a 3-node environment, the approach is to tolerate the failure of any one node while the cluster stays operational.
The default elasticsearch.yml reflects this, and all is well.
In our 3-node environment, we want any one node to be able to stand alone and operate even if both other nodes are lost.
We believe the following achieves this:
index.number_of_replicas: 2 # in 3-node cluster, every node will have p or r copy of every shard
discovery.zen.minimum_master_nodes: 2 # reqd for 3 node env, but what happens when only 1 node survives?
Any additions or changes to the above approach?
We also have a three-node cluster with every node capable of becoming master. I guess apart from minimum master nodes, the rest of the config remains the same as the default. Just as a word of precaution: when the cluster has only one node working, there are no replicas available on that node. Try not to be in that situation in production while you are indexing data; otherwise, if the data set is huge, it takes a good amount of time to propagate all changes and reallocate shards once the other nodes are back up. Cheers.
The answer is:
index.number_of_replicas: 2
On a 3 node system, this means every node will have a replica of every shard, so any 1 node can stand alone/has all the data.
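If the index already exists, the replica count can also be changed on the fly; my-index is again a placeholder:
PUT /my-index/_settings
{
  "index": {
    "number_of_replicas": 2
  }
}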
Distributed databases are meant to be resilient to failures, but each node is not meant to stand alone. It would be possible to set up ES such that each node has 100% of the data from each of the indexes, but that would mean more replicas and fewer shards, both of which will reduce the cluster's performance.
If you are really worried that 2 of your nodes will go down at the same time, I suggest adding a 4th data node instead of setting things up so that the 3rd node can stand alone.
I stumbled over the question of how many masters there can be in a three-node cluster. I came across a point in an article on the internet saying that search and index requests should not be sent to the elected master. Is that correct? So, if I have three nodes acting as master (one of which is the elected master), should I point incoming logs to be indexed and searched at the master nodes other than the elected one? Please clarify. Thanks in advance.
In a three-node cluster, all nodes most likely hold data and are master-eligible. That is the simplest situation, in which you don't have to worry about anything else.
If you have a larger cluster, you can have a couple of nodes which are configured as dedicated master nodes. That is, they are master-eligible and they don't hold any data. For example you would have 3 dedicated master nodes and 7 data nodes (not master-eligible). Exactly one of the dedicated master nodes will always be the elected master.
The point is that since the dedicated master nodes don't hold data, they will not directly serve index and search requests. If you send an index or search request to them, they have no choice but to delegate it to one of the 7 data nodes.
From the Elasticsearch Reference for Modules - Node:
dedicated master nodes are nodes with the settings node.data: false and node.master: true. We actively promote the use of dedicated master nodes in critical clusters to make sure that there are 3 dedicated nodes whose only role is to be master, a lightweight operational (cluster management) responsibility. By reducing the amount of resource intensive work that these nodes do (in other words, do not send index or search requests to these dedicated master nodes), we greatly reduce the chance of cluster instability.
A related question is how many master-eligible nodes there should be in a cluster. The answer is essentially at least 3, in order to prevent split brain (a situation in which, due to a network error, two masters are elected simultaneously).
The Elasticsearch Guide has a section on Minimum Master Nodes, an excerpt:
When you have a split brain, your cluster is at danger of losing data. Because the master is considered the supreme ruler of the cluster, it decides when new indices can be created, how shards are moved, and so forth. If you have two masters, data integrity becomes perilous, since you have two nodes that think they are in charge.
This setting tells Elasticsearch to not elect a master unless there are enough master-eligible nodes available. Only then will an election take place.
This setting should always be configured to a quorum (majority) of your master-eligible nodes. A quorum is (number of master-eligible nodes / 2) + 1. Here are some examples:
If you have ten regular nodes (can hold data, can become master), a quorum is 6.
If you have three dedicated master nodes and a hundred data nodes, the quorum is 2, since you need to count only the master-eligible nodes.
If you have two regular nodes, you are in a conundrum. A quorum would be 2, but this means a loss of one node will make your cluster inoperable. A setting of 1 will allow your cluster to function, but doesn't protect against split brain. It is best to have a minimum of three nodes in situations like this.