Leader Election Algorithm

I am exploring various architectures in cluster computing. Some of the popular ones are:
Master-slave
RPC
...
In master-slave, the usual approach is to designate one machine as the master and a bunch of machines as slaves controlled by the master. One particular algorithm here got me interested: the leader election algorithm, which has a certain randomness in selecting which of the machines becomes the master.
My question is: why would anyone want to elect a master machine this way? What advantages does this approach have compared to manually selecting a machine as master?

There are some advantages to these algorithms:
The leader is selected dynamically, so you can, for example, pick the node with the highest performance, and the arrival of new nodes may yield an even better choice (a small sketch follows below).
Another benefit of selecting the leader dynamically is that if a node suffers a major fault (for example, the machine shuts down), there are other candidates and there is no need to change the leader manually.
If you select a node manually, you have to configure all the other nodes to use it by hand, set their time manually, and so on; these algorithms help you handle such coordination issues.
As a (not very relevant) analogy: why is DHCP used in most cases? Because so much configuration is handled automatically by the protocol.
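A small illustration of the first two points, with made-up node names and performance scores: the cluster re-runs the same selection rule whenever membership changes, so a faster newcomer can take over and a crashed leader is replaced without manual reconfiguration.
# Illustrative sketch only: the leader is whichever live node reports the
# best performance score; re-election happens automatically on join/leave.
def elect_leader(nodes):
    """nodes maps node name -> performance score; the best node wins."""
    return max(nodes, key=nodes.get)

cluster = {"node-a": 2.1, "node-b": 3.4}
print(elect_leader(cluster))   # node-b leads

cluster["node-c"] = 5.0        # a faster node joins the cluster
print(elect_leader(cluster))   # node-c takes over

del cluster["node-c"]          # the current leader dies
print(elect_leader(cluster))   # node-b again, with no manual reconfiguration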

The main idea behind such algorithms is to get rid of extra configuration and to add some flexibility and stability to the whole system. But usually (in HPC/MPI applications) the master node is selected manually.
Suppose your master selection algorithm is quite simple: get the list of available systems and select the one with the highest IP address. In this case you can easily start a new process on any of your nodes and it will automatically find the master node.
One nice example of this idea is the "designated proxy" selection algorithm in the WCCP protocol, where the number of proxies can be flexible and the master node is selected at runtime.
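A minimal sketch of that "highest IP address wins" rule (the addresses are illustrative); note that the IPs are compared numerically rather than as strings:
# Every node applies the same deterministic rule to the same list of
# systems, so each one independently arrives at the same master.
import ipaddress

def pick_master(node_ips):
    """Return the node with the numerically highest IP address."""
    return max(node_ips, key=lambda ip: int(ipaddress.ip_address(ip)))

print(pick_master(["10.0.0.4", "10.0.0.17", "10.0.0.9"]))  # 10.0.0.17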

Consider a network of nodes where it is vital to have exactly one leader node at all times. If the current leader dies, the network somehow has to choose another leader. Given this scenario and requirement, there are two possible ways to do it.
The centralized approach: there is a central node that decides who the leader will be. If the current leader dies, this central node decides who should take over the leader role. But this is a single point of failure: if the central node responsible for deciding the leader goes down, there is no one left to select a new leader when the current leader dies.
The distributed approach: all the nodes come to a consensus on who the leader should be, so no central node is needed to decide, which eliminates the single point of failure. When the leader node dies, there is a way to detect the node failure, and then every node starts a distributed leader election algorithm and they mutually come to a consensus on a new leader (see the sketch below).
In short, when you have a system with no central control, usually because the system is meant to be scalable without a single point of failure, leader election algorithms are used to choose such a node.
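A toy sketch of that distributed flow, under the simplifying assumption that every node ends up with the same membership view (in a real system a failure detector and a consensus protocol such as Raft or Paxos are needed to make this safe):
# Toy illustration: each node keeps its own view of live peers and, when the
# current leader is reported dead, independently re-runs the same election
# rule. Nodes with identical views therefore pick the same new leader.
class Node:
    def __init__(self, node_id, all_ids):
        self.node_id = node_id
        self.alive = set(all_ids)
        self.leader = self.elect()

    def elect(self):
        # Deterministic rule: the highest surviving node id wins.
        return max(self.alive)

    def on_peer_failure(self, dead_id):
        self.alive.discard(dead_id)
        if dead_id == self.leader:
            self.leader = self.elect()

ids = [1, 2, 3]
nodes = [Node(i, ids) for i in ids]
for n in nodes:
    n.on_peer_failure(3)                        # the leader (node 3) dies
print({n.node_id: n.leader for n in nodes})     # every node now agrees on 2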

Related

How do I set failover on my NetApp clusters?

I have two NetApp clusters (main and DR), each with two nodes.
If one of the nodes in either cluster goes down, the other node kicks in and acts as a one-node cluster.
Now my question is: what happens when a whole cluster goes down due to a power supply problem?
I've heard about "Metro Cluster" but I want to ask if there is another option to do so.
It depends on what RPO you need. MetroCluster does synchronous replication of every write and thus provides zero RPO (no data loss).
On the other hand, you could use SnapMirror, which basically takes periodic snapshots and stores them on the other cluster. As you can imagine, you should expect some data loss.

What's the benefit of advanced master election algorithms over the bully algorithm?

I read how current master election algorithms like Raft, Paxos or Zab elect a master in a cluster, and I couldn't understand why they use sophisticated algorithms instead of the simple bully algorithm.
I'm developing a cluster library and use UDP multicast for heartbeat messages. Each node joins a multicast address and also sends datagram packets periodically to that address. If the nodes find out there is a new node sending packets to this multicast address, that node is simply added to the cluster; similarly, when the nodes in the cluster don't get any packet from a node, they remove it from the cluster. When I need to choose a master node, I simply iterate over the nodes in the cluster and choose the oldest one.
I read some articles implying that this approach is not effective and that more sophisticated algorithms like Paxos should be used in order to elect a master or detect failures via heartbeat messages. I couldn't understand why Paxos is better for split-brain scenarios or other network failures than the traditional bully algorithm, because I can easily find out when a quorum of nodes leaves the cluster without using Raft. The only benefit I see is the number of packets that each server has to handle: only the master sends heartbeat messages in Raft, whereas in my case each node has to send heartbeat messages to every other node. However, I don't think this is a problem, since I can simply implement a similar heartbeat algorithm without changing my master election algorithm.
Can someone elaborate on that?
From a theoretical perspective, Raft, Paxos and Zab are not leader election algorithms. They solve a different problem called consensus.
In your concrete scenario, the difference is the following. With a leader election algorithm, you can only guarantee that eventually one node is a leader. That means that during a period of time, multiple nodes might think they are the leader and, consequently, might act like one. In contrast, with the consensus algorithms above, you can guarantee that there is at most one leader in a logical time instant.
The consequence is this. If the safety of the system depends on the presence of a single leader, you might get in trouble relying only on a leader election. Consider an example. Nodes receive messages from the UDP multicast and do A if the sender is the leader, but do B if the sender is not the leader. If two nodes check for the oldest node in the cluster at slightly different points in time, they might see different leaders. These two nodes might then receive a multicast message and process it in different manners, possibly violating some safety property of the system you'd like to hold (e.g., that all nodes either do A or B, but never one does A and another does B).
With Raft, Paxos, and Zab, you can overcome that problem since these algorithms create sort of logical epochs, having at most one leader in each of them.
Two notes here. First, the bully algorithm is defined for synchronous systems. If you really implement it as described in the paper by Garcia-Molina, I believe you might experience problems in your partially synchronous system. Second, the Zab algorithm relies on a sort of bully algorithm for asynchronous systems: the leader is elected by comparing the lengths of the nodes' histories (which minimizes the recovery time of the system). Once the leader is elected, it tries to start the Zab protocol, which in turn locks the epoch for the leader.
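A small sketch of the epoch idea (illustrative only, not the actual Raft/Zab protocol): every elected leader carries an epoch number, and receivers ignore messages from any leader with a stale epoch, so conflicting leaders cannot both be acted upon within the same epoch.
# Illustrative only: leader messages are tagged with an epoch (term) number
# and receivers reject anything older than the highest epoch seen so far.
# Real protocols (Raft, Zab, Paxos) additionally require a quorum to grant
# an epoch, which is what rules out two leaders in the same epoch.
class Receiver:
    def __init__(self):
        self.highest_epoch = 0

    def on_leader_message(self, leader_id, epoch, payload):
        if epoch < self.highest_epoch:
            return f"ignored stale message from {leader_id} (epoch {epoch})"
        self.highest_epoch = epoch
        return f"accepted {payload!r} from {leader_id} (epoch {epoch})"

r = Receiver()
print(r.on_leader_message("node-a", epoch=1, payload="do A"))
print(r.on_leader_message("node-b", epoch=2, payload="do B"))  # newer epoch wins
print(r.on_leader_message("node-a", epoch=1, payload="do A"))  # stale, ignored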

RabbitMQ cluster is not reconnecting after network failure

I have a RabbitMQ cluster with two nodes in production and the cluster is breaking with these error messages:
=ERROR REPORT==== 23-Dec-2011::04:21:34 ===
** Node rabbit@rabbitmq02 not responding **
** Removing (timedout) connection **
=INFO REPORT==== 23-Dec-2011::04:21:35 ===
node rabbit@rabbitmq02 lost 'rabbit'
=ERROR REPORT==== 23-Dec-2011::04:21:49 ===
Mnesia(rabbit@rabbitmq01): ** ERROR ** mnesia_event got {inconsistent_database, running_partitioned_network, rabbit@rabbitmq02}
I tried to simulate the problem by killing the connection between the two nodes using "tcpkill". The cluster disconnected, and surprisingly the two nodes are not trying to reconnect!
When the cluster breaks, the HAProxy load balancer still marks both nodes as active and sends requests to both of them, although they are not in a cluster.
My questions:
If the nodes are configured to work as a cluster, when I get a network failure, why aren't they trying to reconnect afterwards?
How can I identify a broken cluster and shut down one of the nodes? I have consistency problems when working with the two nodes separately.
RabbitMQ clusters do not work well on unreliable networks (as the RabbitMQ documentation notes). So when a network failure happens in a two-node cluster, each node thinks that it is the master and the only node in the cluster. The two master nodes don't automatically reconnect, because their states are not automatically synchronized (even in the case of a RabbitMQ slave, actual message synchronization does not happen; the slave just "catches up" as messages are consumed from the queue and more messages are added).
To detect whether you have a broken cluster, run the command:
rabbitmqctl cluster_status
on each of the nodes that form part of the cluster. If the cluster is broken then you'll only see one node. Something like:
Cluster status of node rabbit@rabbitmq1 ...
[{nodes,[{disc,[rabbit@rabbitmq1]}]},{running_nodes,[rabbit@rabbitmq1]}]
...done.
In such cases, you'll need to run the following set of commands on one of the nodes that formed part of the original cluster, so that it joins the other master node (say rabbitmq1) in the cluster as a slave:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@rabbitmq1
rabbitmqctl start_app
Finally, check the cluster status again; this time you should see both nodes.
Note: If you have the RabbitMQ nodes in an HA configuration using a Virtual IP (and the clients are connecting to RabbitMQ through this Virtual IP), then the node that holds the Virtual IP should be the one made the master.
From RabbitMQ doc: Clustering and Network Partitions
RabbitMQ also has three ways to deal with network partitions automatically: pause-minority mode, pause-if-all-down mode and autoheal mode. The default behaviour is referred to as ignore mode.
In pause-minority mode RabbitMQ will automatically pause cluster nodes which determine themselves to be in a minority (i.e. half or fewer of the total number of nodes) after seeing other nodes go down. It therefore chooses partition tolerance over availability from the CAP theorem. This ensures that in the event of a network partition, at most the nodes in a single partition will continue to run. The minority nodes will pause as soon as a partition starts, and will start again when the partition ends. This configuration prevents split-brain and is therefore able to automatically recover from network partitions without inconsistencies.
In pause-if-all-down mode, RabbitMQ will automatically pause cluster nodes which cannot reach any of the listed nodes. In other words, all the listed nodes must be down for RabbitMQ to pause a cluster node. This is close to the pause-minority mode, however, it allows an administrator to decide which nodes to prefer, instead of relying on the context. For instance, if the cluster is made of two nodes in rack A and two nodes in rack B, and the link between racks is lost, pause-minority mode will pause all nodes. In pause-if-all-down mode, if the administrator listed the two nodes in rack A, only nodes in rack B will pause. Note that it is possible the listed nodes get split across both sides of a partition: in this situation, no node will pause. That is why there is an additional ignore/autoheal argument to indicate how to recover from the partition.
In autoheal mode RabbitMQ will automatically decide on a winning partition if a partition is deemed to have occurred, and will restart all nodes that are not in the winning partition. Unlike pause_minority mode it therefore takes effect when a partition ends, rather than when one starts.
The winning partition is the one which has the most clients connected (or if this produces a draw, the one with the most nodes; and if that still produces a draw then one of the partitions is chosen in an unspecified way).
You can enable either mode by setting the configuration parameter cluster_partition_handling for the rabbit application in the configuration file to:
autoheal
pause_minority
pause_if_all_down
If using the pause_if_all_down mode, additional parameters are required:
nodes: nodes which should be unavailable to pause
recover: recover action, can be ignore or autoheal
...
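As a concrete illustration (not part of the quoted documentation): in a modern ini-style rabbitmq.conf, enabling one of these modes might look like the lines below; older releases use the classic Erlang-terms rabbitmq.config format instead, and the node names here are placeholders.
cluster_partition_handling = pause_minority
or, for pause_if_all_down with a preferred set of nodes and a recovery action:
cluster_partition_handling = pause_if_all_down
cluster_partition_handling.pause_if_all_down.recover = autoheal
cluster_partition_handling.pause_if_all_down.nodes.1 = rabbit@rack-a-node1
cluster_partition_handling.pause_if_all_down.nodes.2 = rabbit@rack-a-node2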
Which Mode to Pick?
It's important to understand that allowing RabbitMQ to deal with network partitions automatically comes with trade offs.
As stated in the introduction, to connect RabbitMQ clusters over generally unreliable links, prefer Federation or the Shovel.
With that said, here are some guidelines to help the operator determine which mode may or may not be appropriate:
ignore: use when network reliability is the highest practically possible and node availability is of topmost importance. For example, all cluster nodes can be in the same rack or equivalent, connected with a switch, and that switch is also the route to the outside world.
pause_minority: appropriate when clustering across racks or availability zones in a single region, and the probability of losing a majority of nodes (zones) at once is considered to be very low. This mode trades off some availability for the ability to automatically recover if/when the lost node(s) come back.
autoheal: appropriate when you are more concerned with continuity of service than with data consistency across nodes.
One other way to recover from this kind of failure is to work with Mnesia, the database that RabbitMQ uses as its persistence mechanism; the synchronization of the RabbitMQ instances (and of their master/slave status) is controlled by it. For all the details, refer to the following URL: http://www.erlang.org/doc/apps/mnesia/Mnesia_chap7.html
Adding the relevant section here:
There are several occasions when Mnesia may detect that the network has been partitioned due to a communication failure. One is when Mnesia already is up and running and the Erlang nodes gain contact again. Then Mnesia will try to contact Mnesia on the other node to see if it also thinks that the network has been partitioned for a while. If Mnesia on both nodes has logged mnesia_down entries from each other, Mnesia generates a system event, called {inconsistent_database, running_partitioned_network, Node}, which is sent to Mnesia's event handler and other possible subscribers. The default event handler reports an error to the error logger.
Another occasion when Mnesia may detect that the network has been partitioned due to a communication failure is at start-up. If Mnesia detects that both the local node and another node received mnesia_down from each other, it generates an {inconsistent_database, starting_partitioned_network, Node} system event and acts as described above.
If the application detects that there has been a communication failure which may have caused an inconsistent database, it may use the function mnesia:set_master_nodes(Tab, Nodes) to pinpoint from which nodes each table may be loaded.
At start-up, Mnesia's normal table load algorithm will be bypassed and the table will be loaded from one of the master nodes defined for the table, regardless of potential mnesia_down entries in the log. The Nodes may only contain nodes where the table has a replica, and if it is empty, the master node recovery mechanism for the particular table will be reset and the normal load mechanism will be used when next restarting.
The function mnesia:set_master_nodes(Nodes) sets master nodes for all tables. For each table it will determine its replica nodes and invoke mnesia:set_master_nodes(Tab, TabNodes) with those replica nodes that are included in the Nodes list (i.e. TabNodes is the intersection of Nodes and the replica nodes of the table). If the intersection is empty, the master node recovery mechanism for the particular table will be reset and the normal load mechanism will be used at next restart.
The functions mnesia:system_info(master_node_tables) and mnesia:table_info(Tab, master_nodes) may be used to obtain information about the potential master nodes.
Determining which data to keep after communication failure is outside the scope of Mnesia. One approach would be to determine which "island" contains a majority of the nodes. Using the {majority,true} option for critical tables can be a way of ensuring that nodes that are not part of a "majority island" are not able to update those tables. Note that this constitutes a reduction in service on the minority nodes. This would be a tradeoff in favour of higher consistency guarantees.
The function mnesia:force_load_table(Tab) may be used to force load the table regardless of which table load mechanism is activated.
This is a lengthier and more involved way of recovering from such failures, but it gives better granularity and control over the data that should be available on the final master node (which can reduce the amount of data loss that might happen when "merging" RabbitMQ masters).

HA Minimal Cluster for Five Nines

I am trying to find out whether a 3-node HA cluster is common practice. Most of the references on Google point to 2-node clusters, but I am not able to convince myself that an application that requires five nines can be implemented as a 2-node HA cluster on commodity hardware.
The reason is simple: if the machine on which one node runs goes offline, there will be only one node left without any backup.
To reduce dependency on the node that went offline, I think a 3-node cluster is a minimum requirement.
In order to give a factual answer, much more data would be required.
But from an anecdotal perspective, two nodes of commodity hardware are not nearly enough to give you five-nines with any level of reliability (or at least sleep-at-night comfort).
Most cluster diagrams are likely drawn with only two nodes for ease of explanation, "If A fails, B keeps working".
Given your five-nines requirement, however, and "commodity hardware", I would consider more than three nodes a requirement; perhaps as many as five or more.
Remember to allow for network, power and perhaps even geographical diversity if you are really after that kind of reliability.
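A rough back-of-envelope check (the 99% per-node availability figure and the independence of failures are assumptions for illustration, not measurements): if each node is available a fraction p of the time and the service survives as long as at least one node is up, overall availability is 1 - (1 - p)^n, while five nines allows roughly 5.3 minutes of downtime per year.
# Back-of-envelope estimate; 0.99 per-node availability is an illustrative
# assumption, and real-world failures are rarely independent.
def cluster_availability(per_node, n):
    """Availability when at least one of n independent nodes must be up."""
    return 1 - (1 - per_node) ** n

for n in (2, 3, 5):
    a = cluster_availability(0.99, n)
    downtime_min = (1 - a) * 365 * 24 * 60
    print(f"{n} nodes: {a:.6%} available, ~{downtime_min:.1f} min downtime/year")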

Which cluster node should be active?

There is a cluster, and there is a Unix network daemon. This daemon is started on each cluster node, but only one instance can be active at a time.
When the active daemon breaks (whether the program crashes or the node goes down), another node should become active.
I can think of a few possible algorithms, but I suspect there is existing research on this and some ready-to-use algorithms. Am I right? Can you point me to them?
Thanks.
JGroups is a Java network stack that includes DistributedLockManager-style support and cluster voting capabilities. These allow any number of Unix daemons to agree on who should be active. All of the nodes could be trying to obtain a lock (for example), and only one will succeed until that application or node fails.
JGroups also has the concept of a coordinator of a specific communication channel. Only one node can be the coordinator at a time, and when a node fails, another node becomes coordinator. It is simple to test whether you are the coordinator, in which case you would be the active node.
See: http://www.jgroups.org/javadoc/org/jgroups/blocks/DistributedLockManager.html
If you are going to implement this yourself, there is a bunch of stuff to keep in mind:
Each node needs to have a consistent view of the cluster.
All nodes will need to inform all of the rest of the nodes that they are online -- maybe with multicast.
Nodes that go offline (because of app or node failure) will need to be removed from all other nodes' "view".
You can then have the node with the lowest IP (or something similar) be the active node, as in the sketch below.
If this isn't appropriate, then you will need to have some sort of voting exchange so the nodes can agree on who is active. Something like: http://en.wikipedia.org/wiki/Two-phase_commit_protocol
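A rough Python sketch of that multicast-heartbeat idea (the group address, port, timeouts and the lowest-IP rule are all illustrative choices; a production setup would want JGroups-style view agreement or an external coordination service on top):
# Each daemon multicasts a heartbeat and tracks when it last heard from each
# peer; peers that stay silent too long are dropped from the view, and the
# node with the numerically lowest IP in the current view is "active".
import ipaddress
import json
import socket
import struct
import time

GROUP, PORT = "224.0.0.250", 5007        # illustrative multicast group/port
HEARTBEAT_EVERY, DEAD_AFTER = 1.0, 3.0   # seconds

def make_socket():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(HEARTBEAT_EVERY)
    return sock

def run(my_ip):
    sock = make_socket()
    last_seen = {}                       # peer ip -> time of last heartbeat
    while True:
        sock.sendto(json.dumps({"ip": my_ip}).encode(), (GROUP, PORT))
        try:
            data, _ = sock.recvfrom(1024)
            last_seen[json.loads(data)["ip"]] = time.time()
        except socket.timeout:
            pass
        now = time.time()
        alive = {ip for ip, t in last_seen.items() if now - t < DEAD_AFTER}
        alive.add(my_ip)
        active = min(alive, key=lambda ip: int(ipaddress.ip_address(ip)))
        print("active node:", active, "(this node)" if active == my_ip else "")

# run("10.0.0.12")  # start one instance per node, each with its own address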
