Clustered elasticsearch setup (two master nodes) - elasticsearch

We are currently setting up an environment with two elasticsearch instances (clustered servers).
Since it's clustered, we need to make sure that data (indexes) is synced between the two instances.
We do not have the possibility to set up an additional (3rd) server/instance to act as the 'master'.
Therefore we have configured both instances as master and data nodes: instance 1 is a master & data node, and instance 2 is also a master & data node.
The synchronization works fine when both instances are up and running. But when one instance is down, the other keeps trying to connect to the instance that is down, which obviously fails. As a result, the node that is up stops functioning as well, because it cannot reach its 'master' node (the node that is down), even though it is itself master-eligible.
The following errors are logged in this case:
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/2/no master];
org.elasticsearch.transport.ConnectTransportException: [xxxxx-xxxxx-2][xx.xx.xx.xx:9300] connect_exception
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: xx.xx.xx.xx/xx.xx.xx.xx:9300
In short: two Elasticsearch master instances in a clustered setup. When one is down, the other does not function because it cannot connect to the 'master' instance.
Desired result: If one of the master instances is down, the other should continue functioning (without throwing errors).
Any recommendations on how to solve this, without having to set up an additional server that is the 'master' and the other two the 'slaves'?
Thanks

To be able to elect a master, a quorum of at least 2 master-eligible nodes must be available.
That's why you need a minimum of 3 master-eligible nodes if you want your cluster to survive the loss of one node.
You can just add a specialized small master node by setting all other roles to false.
This node can have very few resources.
As described in this post:
https://discuss.elastic.co/t/master-node-resource-requirement/84609
Dedicated master nodes need persistent storage, but not a lot of it. 1-2 CPU cores and 2-4GB RAM is often sufficient for smaller deployments. As dedicated master nodes do not store data you can also set the heap to a higher percentage (75%-80%) of total RAM than is recommended for data nodes.
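For illustration, a minimal sketch of what such a dedicated master-only node could look like in elasticsearch.yml (the node.roles syntax applies to Elasticsearch 7.9+; the commented flags are the legacy 6.x equivalent - names and values here are assumptions, adjust to your setup):
# elasticsearch.yml for a small, dedicated master-only node (7.9+)
cluster.name: my-cluster        # assumed cluster name
node.name: master-only-1        # assumed node name
node.roles: [ master ]          # master-eligible only, no data or ingest role
# Legacy (6.x / early 7.x) equivalent:
# node.master: true
# node.data: false
# node.ingest: false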

If there is no option to add one more node, then you can set
minimum_master_nodes=1. This will keep your ES cluster up even if only one node is up. But it may lead to a split-brain issue, since only one node needs to be visible to form a cluster.
In that scenario you have to restart the cluster to resolve the split brain.
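For reference, a hedged sketch of where that legacy setting lives (6.x-style elasticsearch.yml; removed in 7.0):
discovery.zen.minimum_master_nodes: 1   # lets a lone node elect itself master - accepts the split-brain risk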
I would suggest you upgrade to Elasticsearch 7.0 or above. There you can live with two nodes, each master-eligible, and the split-brain issue will not come up.

You should not have 2 master-eligible nodes in the cluster, as it's a very risky thing and can lead to a split-brain issue.
Master nodes don't require many resources, but as you have just two data nodes, you can still live without dedicated master nodes (but please be aware that this has downsides) just to save cost.
So simply remove the master role from one of the two nodes and you should be good to go.
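As a sketch of that change (6.x-style role flags are assumed; on 7.9+ you would list the roles explicitly instead), the second node's elasticsearch.yml could look like:
node.master: false      # this node can no longer be elected master
node.data: true         # it keeps serving as a data node
# On 7.9+ the equivalent would be:
# node.roles: [ data ]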

Related

Can I use the same flow.xml.gz for two different NiFi clusters?

We have a 13-node NiFi cluster with around 50k processors. The size of the flow.xml.gz is around 300MB. Bringing up the 13-node NiFi cluster usually takes 8-10 hours. Recently we split the cluster into two parts, a 5-node cluster and an 8-node cluster, with the same 300MB flow.xml.gz in both. Since then we have not been able to get NiFi up in either cluster, and we are not seeing any useful logs related to this issue. Is it okay to have the same flow.xml.gz? What best practices could we be missing when splitting the NiFi cluster?
You ask a number of questions that all boil down to "How to improve performance of our NiFi cluster with a very large flow.xml.gz".
Without a lot more details on your cluster and the flows in it, I can't give a definite or guaranteed-to-work answer, but I can point out some of the steps.
Splitting the cluster is no good without splitting the flow.
Yes, you will reduce cluster communications overhead somewhat, but you probably have a number of input processors that are set to "Primary Node only". If you load the same flow.xml.gz on two clusters, both will have a primary node executing these, leading to contention issues.
More importantly, since every node still loads all of the flow.xml.gz (probably 4 GB unzipped), you don't get any other performance benefits, and verifying the 50k processors in the flow at startup still takes ages.
How to split the cluster
Splitting the cluster in the way you did probably left references to nodes that are now in the other cluster, for example in the local state directory. For NiFi clustering, that may cause problems electing a new cluster coordinator and primary node, because a quorum can't be reached.
It would be cleaner to disconnect, offload and delete those nodes first from the cluster GUI so that these references are deleted. Those nodes can then be configured as a fresh cluster with an empty flow. Even if you use the old flow again later, test it out with an empty flow to make it a lot quicker.
Since you already split the cluster, I would try to start one node of the 8-member cluster and see if you can access the cluster menu to delete the split-off nodes (disconnecting and offloading probably doesn't work anymore). Then, for the other 7 members of the cluster, delete the flow.xml.gz and start them; they should copy over the flow from the running node. You should adjust the number of expected candidates in nifi.properties (nifi.cluster.flow.election.max.candidates, as in the sketch below) so that it is not larger than the number of nodes, to slightly speed up this process.
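For illustration, a hedged sketch of the relevant nifi.properties entries on the 8 remaining nodes (the values are assumptions for this scenario):
# nifi.properties - cluster flow election tuning
nifi.cluster.flow.election.max.candidates=8
nifi.cluster.flow.election.max.wait.time=5 mins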
If successful, you then have the 300 MB flow running on the 8 member cluster and an empty flow on the new 5 member cluster.
Connect the new cluster to your development pipeline (NiFi Registry, templates or otherwise). Then you can stop process groups on the 8-member cluster, import them on the new cluster, and after verifying that the flows are running there, delete the process groups from the old cluster, slowly shrinking it.
If you have no pipeline or it's too much work to recreate all the controllers and parameter contexts, you could take a copy of the flow.xml.gz to one new node, start only that node and delete all the stuff you don't need. Only after that should you start the others (with their empty flow.xml.gz) again.
For more expert advice, you should also try the Apache NiFi Users email list. If you supply enough relevant details in your question, someone there may know what is going wrong with your cluster.

Can I set active master in an Elasticsearch cluster manually?

I know that it is possible to define more than one master for the ElasticSearch cluster, where only one acts as master and the others can step in if necessary. See also https://stackoverflow.com/a/15022820/2648551 .
What I don't understand is how I can determine which master is active and which could step in if necessary.
The following setting I currently have:
node-01: master (x) data(-)
node-02: master (-) data(x)
node-03: master (-) data(x)
node-04: master (-) data(x)
node-05: master (-) data(x)
node-06: master (-) data(x)
Now I want to make, for example, node-02 additionally master-eligible. Can I rely on ES being smart enough to always pick the non-data node (node-01) as the active master, or could it happen that node-02 acts as the active master even when all nodes are present and there are no problems? Or is that something I just don't have to worry about?
I am currently using ElasticSearch 1.7 [sic!], but I am also interested in answers based on the latest versions.
A few years later, and just for context: we "can" now decide which node becomes master; although it's not straightforward, it is possible.
Elasticsearch now has an API called voting_config_exclusions which can be used to move away from the current master node, e.g.
let's say you have 3 master-eligible nodes in your cluster
$ GET _cat/nodes?v
ip node.role master name
192.168.0.10 cdfhilmrstw - node-10
192.168.0.20 cdfhilmrstw * node-20
192.168.0.30 cdfhilmrstw - node-30
192.168.0.99 il - node-99
and Elasticsearch has selected node-20 as the active master, you can run the following call to remove the active node from voting.
POST /_cluster/voting_config_exclusions?node_names=node_name
This will randomly select another master-eligible node as master (if you have more than one left). Keep doing this for the active nodes until the right one ends up as master.
Note: this doesn't remove the node; it only makes it an inactive master / non-voting node and allows another node to become the active master.
Once done, make sure to run the command below to remove the exclusions and allow all eligible nodes to become master if and when the selected node goes down.
DELETE /_cluster/voting_config_exclusions
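For concreteness, a hedged example of the sequence with curl, reusing the node names from the _cat/nodes output above (on versions before 7.8 the excluded node is passed in the URL path rather than via node_names):
# exclude the current active master (node-20) from voting; another node gets elected
curl -X POST "localhost:9200/_cluster/voting_config_exclusions?node_names=node-20&pretty"
# check which node is the active master now
curl -X GET "localhost:9200/_cat/nodes?v&pretty"
# once the desired node is master, clear the exclusion list again
# (wait_for_removal=false because the excluded node stays in the cluster)
curl -X DELETE "localhost:9200/_cluster/voting_config_exclusions?wait_for_removal=false&pretty"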
Thank You
In short, no, you can't decide which of the master-eligible nodes will become the master, because the master node is elected (it was in ES 1.7, and it still is in ES 6.2).
No, you can't rely on Elasticsearch being smart enough to always take the non-data node as the active master. In fact, as of now (6.2) they advise having dedicated master nodes (i.e. nodes that do not perform any data operations):
To ensure that your master node is stable and not under pressure, it
is a good idea in a bigger cluster to split the roles between
dedicated master-eligible nodes and dedicated data nodes.
... It is important
for the stability of the cluster that master-eligible nodes do as
little work as possible.
(Note that they are talking about a "bigger cluster".)
I can only assume that this also holds for the earlier versions and the documentation just got richer.
There is a problem with the configuration that you have posted. Although you have many nodes, the loss of one (the master node, node-01) will make your cluster non-functional. To avoid this situation you may choose one of these options:
use default strategy and make all nodes data nodes and master nodes;
make a set of dedicated master-only nodes (at least 3 of them).
It would be nice to know the reason why the ES defaults are not good enough for you, because usually they are good enough.
However, if this is the case when you need a dedicated master node, make sure you have at least 3 and that discovery.zen.minimum_master_nodes is enough to avoid the "split brain" situation:
discovery.zen.minimum_master_nodes = (master_eligible_nodes / 2) + 1
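For example, with 3 dedicated master-eligible nodes the formula gives (3 / 2) + 1 = 2 (integer division), so a hedged elasticsearch.yml line for that case would be:
discovery.zen.minimum_master_nodes: 2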
Hope that helps!

About elasticsearch cluster

I need to provide many Elasticsearch instances for different clients, but hosted in my infrastructure.
For the moment these are only some small instances.
I am wondering whether it would be better to build one big Elasticsearch cluster with 3-5 servers to handle all clients, where each client gets a different index in this cluster and each index is distributed over the servers.
Or maybe another idea?
And another question is about the quorum: what is the quorum for ES, please?
thanks,
You don't have to assign each client to a different index; the Elasticsearch cluster will automatically share the load among all nodes that hold shards.
If you are not sure how many nodes are needed, start with a small cluster and keep monitoring the cluster health. Add more nodes to the cluster if server load is high; remove nodes if server load is low.
When the cluster keeps growing, you may need to assign a dedicated role to each node. This way you have more control over the cluster, and it is easier to diagnose problems and plan resources. For example: add more master nodes to stabilize the cluster, add more data nodes to increase search and indexing performance, add more coordinating nodes to handle client requests.
A quorum is defined as the majority of master-eligible nodes in the cluster, as follows:
(master_eligible_nodes / 2) + 1
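For example, with 3 master-eligible nodes the quorum is (3 / 2) + 1 = 2 (integer division), and with 5 it is (5 / 2) + 1 = 3; so a 3-node cluster tolerates losing one master-eligible node and a 5-node cluster tolerates losing two.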

Elasticsearch minimum master nodes

I have a 3 node cluster with minimum_master_nodes set to 2. If I shut down all nodes except the master, leaving one node online, the cluster is no longer operational.
Is this by design? It seems like the node that was the master should remain operational; instead I get errors like this:
{"error":"MasterNotDiscoveredException[waited for [30s]]","status":503}
All the other settings are stock and I am using the aws cloud plugin.
Yes, this is intentional.
Split brain
Imagine a situation where the other 2 nodes were still running but couldn't communicate with the third node - you'd end up with two clusters, otherwise known as a "split brain".
As the two clusters could be updating and deleting data independently of each other then recovery would be very difficult - you wouldn't have a single source of truth for the data.
By setting minimum_master_nodes to (n/2)+1 (where n is the number of nodes) you can prevent a split brain.
Single Node
If you know that the first two nodes have definitely died and are not coming back, you can set minimum_master_nodes to 1 on the remaining node (and also set it to 1 on the other nodes before you restart them).
There is also a "no master block" option that lets you control what happens when you don't have a valid cluster - e.g. you could make the remaining node read-only until the cluster is re-established.
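As a hedged sketch (Elasticsearch 6.x-era settings; the discovery.zen.* options were replaced in 7.x), the two settings mentioned above would sit in elasticsearch.yml roughly like this:
discovery.zen.minimum_master_nodes: 2   # quorum for a 3-node cluster: (3 / 2) + 1
discovery.zen.no_master_block: write    # with no master, reject writes but keep serving reads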

RabbitMQ cluster is not reconnecting after network failure

I have a RabbitMQ cluster with two nodes in production and the cluster is breaking with these error messages:
=ERROR REPORT==== 23-Dec-2011::04:21:34 ===
** Node rabbit@rabbitmq02 not responding **
** Removing (timedout) connection **
=INFO REPORT==== 23-Dec-2011::04:21:35 ===
node rabbit@rabbitmq02 lost 'rabbit'
=ERROR REPORT==== 23-Dec-2011::04:21:49 ===
Mnesia(rabbit@rabbitmq01): ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, rabbit@rabbitmq02}
I tried to simulate the problem by killing the connection between the two nodes using "tcpkill". The cluster disconnected, and surprisingly the two nodes did not try to reconnect!
When the cluster breaks, the HAProxy load balancer still marks both nodes as active and sends requests to both of them, although they are not in a cluster.
My questions:
If the nodes are configured to work as a cluster, when I get a network failure, why aren't they trying to reconnect afterwards?
How can I identify a broken cluster and shut down one of the nodes? I have consistency problems when working with the two nodes separately.
RabbitMQ clusters do not work well on unreliable networks (see the RabbitMQ documentation). So when a network failure happens (in a two-node cluster), each node thinks that it is the master and the only node in the cluster. The two master nodes don't automatically reconnect, because their states are not automatically synchronized (even in the case of a RabbitMQ slave, the actual message synchronization does not happen - the slave just "catches up" as messages get consumed from the queue and more messages get added).
To detect whether you have a broken cluster, run the command:
rabbitmqctl cluster_status
on each of the nodes that form part of the cluster. If the cluster is broken then you'll only see one node. Something like:
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1]}]},{running_nodes,[rabbit@rabbitmq1]}]
...done.
In such cases, you'll need to run the following set of commands on one of the nodes that formed part of the original cluster (so that it joins the other master node (say rabbitmq1) in the cluster as a slave):
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app
Finally, check the cluster status again; this time you should see both nodes.
Note: If you have the RabbitMQ nodes in an HA configuration using a Virtual IP (and the clients are connecting to RabbitMQ using this virtual IP), then the node that should be made the master should be the one that has the Virtual IP.
From RabbitMQ doc: Clustering and Network Partitions
RabbitMQ also offers three ways to deal with network partitions automatically: pause-minority mode, pause-if-all-down mode and autoheal mode. The default behaviour is referred to as ignore mode.
In pause-minority mode RabbitMQ will automatically pause cluster nodes which determine themselves to be in a minority (i.e. fewer or equal than half the total number of nodes) after seeing other nodes go down. It therefore chooses partition tolerance over availability from the CAP theorem. This ensures that in the event of a network partition, at most the nodes in a single partition will continue to run. The minority nodes will pause as soon as a partition starts, and will start again when the partition ends. This configuration prevents split-brain and is therefore able to automatically recover from network partitions without inconsistencies.
In pause-if-all-down mode, RabbitMQ will automatically pause cluster nodes which cannot reach any of the listed nodes. In other words, all the listed nodes must be down for RabbitMQ to pause a cluster node. This is close to the pause-minority mode, however, it allows an administrator to decide which nodes to prefer, instead of relying on the context. For instance, if the cluster is made of two nodes in rack A and two nodes in rack B, and the link between racks is lost, pause-minority mode will pause all nodes. In pause-if-all-down mode, if the administrator listed the two nodes in rack A, only nodes in rack B will pause. Note that it is possible the listed nodes get split across both sides of a partition: in this situation, no node will pause. That is why there is an additional ignore/autoheal argument to indicate how to recover from the partition.
In autoheal mode RabbitMQ will automatically decide on a winning partition if a partition is deemed to have occurred, and will restart all nodes that are not in the winning partition. Unlike pause_minority mode it therefore takes effect when a partition ends, rather than when one starts.
The winning partition is the one which has the most clients connected (or if this produces a draw, the one with the most nodes; and if that still produces a draw then one of the partitions is chosen in an unspecified way).
You can enable either mode by setting the configuration parameter cluster_partition_handling for the rabbit application in the configuration file to:
autoheal
pause_minority
pause_if_all_down
If using the pause_if_all_down mode, additional parameters are required:
nodes: the nodes which must all be unavailable for this node to pause
recover: the recover action, can be ignore or autoheal
...
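For illustration, a hedged sketch of how enabling one of these modes could look (new-style rabbitmq.conf format; older installs use the Erlang-term rabbitmq.config format shown in the comment):
# rabbitmq.conf - choose one partition-handling strategy
cluster_partition_handling = pause_minority
# classic rabbitmq.config (Erlang terms) equivalent:
# [{rabbit, [{cluster_partition_handling, pause_minority}]}].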
Which Mode to Pick?
It's important to understand that allowing RabbitMQ to deal with network partitions automatically comes with trade offs.
As stated in the introduction, to connect RabbitMQ clusters over generally unreliable links, prefer Federation or the Shovel.
With that said, here are some guidelines to help the operator determine which mode may or may not be appropriate:
ignore: use when network reliability is the highest practically possible and node availability is of topmost importance. For example, all cluster nodes can be in the same rack or equivalent, connected with a switch, and that switch is also the route to the outside world.
pause_minority: appropriate when clustering across racks or availability zones in a single region, and the probability of losing a majority of nodes (zones) at once is considered to be very low. This mode trades off some availability for the ability to automatically recover if/when the lost node(s) come back.
autoheal: appropriate when you are more concerned with continuity of service than with data consistency across nodes.
One other way to recover from this kind of failure is to work with Mnesia, the database that RabbitMQ uses as its persistence mechanism; the synchronization of the RabbitMQ instances (and their master/slave status) is controlled by it. For all the details, refer to the following URL: http://www.erlang.org/doc/apps/mnesia/Mnesia_chap7.html
Adding the relevant section here:
There are several occasions when Mnesia may detect that the network
has been partitioned due to a communication failure.
One is when Mnesia already is up and running and the Erlang nodes gain
contact again. Then Mnesia will try to contact Mnesia on the other
node to see if it also thinks that the network has been partitioned
for a while. If Mnesia on both nodes has logged mnesia_down entries
from each other, Mnesia generates a system event, called
{inconsistent_database, running_partitioned_network, Node} which is
sent to Mnesia's event handler and other possible subscribers. The
default event handler reports an error to the error logger.
Another occasion when Mnesia may detect that the network has been
partitioned due to a communication failure, is at start-up. If Mnesia
detects that both the local node and another node received mnesia_down
from each other it generates a {inconsistent_database,
starting_partitioned_network, Node} system event and acts as described
above.
If the application detects that there has been a communication failure
which may have caused an inconsistent database, it may use the
function mnesia:set_master_nodes(Tab, Nodes) to pinpoint from which
nodes each table may be loaded.
At start-up Mnesia's normal table load algorithm will be bypassed and
the table will be loaded from one of the master nodes defined for the
table, regardless of potential mnesia_down entries in the log. The
Nodes may only contain nodes where the table has a replica and if it
is empty, the master node recovery mechanism for the particular table
will be reset and the normal load mechanism will be used when next
restarting.
The function mnesia:set_master_nodes(Nodes) sets master nodes for all
tables. For each table it will determine its replica nodes and invoke
mnesia:set_master_nodes(Tab, TabNodes) with those replica nodes that
are included in the Nodes list (i.e. TabNodes is the intersection of
Nodes and the replica nodes of the table). If the intersection is
empty the master node recovery mechanism for the particular table will
be reset and the normal load mechanism will be used at next restart.
The functions mnesia:system_info(master_node_tables) and
mnesia:table_info(Tab, master_nodes) may be used to obtain information
about the potential master nodes.
Determining which data to keep after communication failure is outside
the scope of Mnesia. One approach would be to determine which "island"
contains a majority of the nodes. Using the {majority,true} option for
critical tables can be a way of ensuring that nodes that are not part
of a "majority island" are not able to update those tables. Note that
this constitutes a reduction in service on the minority nodes. This
would be a tradeoff in favour of higher consistency guarantees.
The function mnesia:force_load_table(Tab) may be used to force load
the table regardless of which table load mechanism is activated.
This is a more lengthy and involved way of recovering from such failures, but it gives better granularity and control over the data that should be available on the final master node (which can reduce the amount of data loss that might happen when "merging" RabbitMQ masters).
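As a rough, hedged sketch of that approach (assuming you accept the surviving node's copy of the tables), the Mnesia calls from the quoted documentation can be issued through rabbitmqctl eval on that node:
# declare the local node as the master for all Mnesia tables
rabbitmqctl eval 'mnesia:set_master_nodes([node()]).'
# inspect which tables currently have master nodes defined
rabbitmqctl eval 'mnesia:system_info(master_node_tables).'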
