I would like to know whether it is possible to deploy two ODL platforms in High Availability mode sharing the same DataBroker.
If it is possible, which path should I follow? Thanks.
Yes, this is possible in Carbon. The clustering setup is fully documented here; the short version is:
install OpenDaylight on all nodes you want to run in the cluster
on each node in turn, run
bin/configure_cluster.sh 2 192.168.0.1 192.168.0.2 192.168.0.3
with the node number instead of 2 (so 1 on the first node, 2 on the second node, etc.), and all the nodes’ IP addresses instead of 192.168.0.1 192.168.0.2 192.168.0.3;
start Karaf on each node, and run
feature:install odl-mdsal-clustering
on each one in turn.
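For example, on the first of the three nodes above, the whole sequence would look something like this (the IP list is the same placeholder set everywhere, only the leading node index changes per node, and feature:install is typed at the Karaf console):
bin/configure_cluster.sh 1 192.168.0.1 192.168.0.2 192.168.0.3
bin/karaf
feature:install odl-mdsal-clustering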
Related
I would like to set up an IoT reporting database with failover. The idea is to have a cluster with 2 nodes, one in a datacenter and one at home.
If "home" loses its internet connection, it continues to operate, and once it is back online, "datacenter" would sync up with the changes made offline.
Now, I read in the RethinkDB docs that you need at least 3 nodes for failover to function.
So the question is: is my scenario doable with just 2 nodes, and if yes, how?
According to the docs, https://www.rethinkdb.com/docs/start-a-server/:
First, start RethinkDB on the first machine:
$ rethinkdb --bind all
Then start RethinkDB on the second machine:
$ rethinkdb --join IP_OF_FIRST_MACHINE:29015 --bind all
I am new to MQTT and would like to create a VerneMQ cluster of three nodes. How can I do this (with the mosquitto client), please?
I have tried to do it with a bridge between two distinct Ubuntu 18 VMs, but I haven't had success.
First you need to have 3 running VerneMQ nodes. Then you'll join one node to the other like this:
vmq-admin cluster join discovery-node=<OtherClusterNode>
Then you check the cluster state (you should see a 2 node cluster):
vmq-admin cluster show
Then you repeat the first command and join the 3rd node to the cluster (the discovery-node can be any node in the existing cluster).
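As a concrete sketch with placeholder node names (VerneMQ node names take the Name@Host form), on the second node you would run:
vmq-admin cluster join discovery-node=VerneMQ@10.0.0.1
vmq-admin cluster show
and then on the third node:
vmq-admin cluster join discovery-node=VerneMQ@10.0.0.2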
Note: your VerneMQ nodes need to be configured correctly, namely with regard to configured listeners and ports. See here:
https://vernemq.com/docs/clustering/communication.html
If you use cloud VMs/Docker or similar, make sure you configure access accordingly.
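For instance, the inter-node clustering listener is set in vernemq.conf; the IP below is a placeholder and 44053 is the conventional clustering port:
listener.vmq.clustering = 10.0.0.1:44053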
You need to take care of a couple of things if you want to run multiple VerneMQ instances on the same machine. There is a make option that lets you build multiple releases as a convenience. This will prepare 3 correctly configured vernemq.conf files, with different ports for the MQTT listeners etc.
make dev1 dev2 dev3
This will prepare different vernemq.conf files in the respective release builds. (look at them in the _build directory after having built the releases.)
You can then start the respective broker instances in 3 terminal windows.
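A minimal sketch of starting them, assuming the standard rebar3 release layout under _build (the exact paths may differ in your checkout):
_build/dev1/rel/vernemq/bin/vernemq start
_build/dev2/rel/vernemq/bin/vernemq start
_build/dev3/rel/vernemq/bin/vernemq start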
Hope this helps.
I'm trying to bring up a consul cluster for production purposes. I didn't find much information about best practices for deploying a consul cluster. Let's say I want to have a cluster with 3 nodes. I'm wondering what the difference is between the following scenarios and which one is preferred.
running consul agent -server -data-dir /tmp/consul on each node.
running consul agent -server -data-dir /tmp/consul --bootstrap on only the first node.
running consul agent -server -data-dir /tmp/consul --bootstrap-expect 1 on each node, or only the first node?
running consul agent -server -data-dir /tmp/consul --bootstrap-expect 3 on each node, or only the first node?
Having done this initial step, how should I then cluster all 3 nodes together? Should I run consul join <ip_node_1> <ip_node_2> <ip_node_3> on each node, or on the first node only?
If I want to run the consul agent in Docker containers, is it good practice to mount the -data-dir directory as a volume from the host box?
I believe your bullet #4 with -bootstrap-expect on each node is the preferred method.
From the Consul bootstrapping documentation:
The recommended way to bootstrap is to use the -bootstrap-expect configuration option. This option informs Consul of the expected number of server nodes and automatically bootstraps when that many servers are available. To prevent inconsistencies and split-brain situations (that is, clusters where multiple servers consider themselves leader), all servers should either specify the same value for -bootstrap-expect or specify no value at all. Only servers that specify a value will attempt to bootstrap the cluster.
and your bullet #2 is discouraged: the docs for -bootstrap-expect describe the -bootstrap flag as "legacy".
As for joining, I am using the auto-join feature of the Atlas integration so I don't need to manually join or specify the node IPs.
This forum Q&A also helped to confirm this approach and provide some detail on what happens when -bootstrap-expect is used.
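As a concrete sketch of that approach (the IP addresses and data directory are placeholders), each of the three servers is started the same way and then joined from any one of them:
consul agent -server -data-dir /var/consul -bootstrap-expect 3 -bind 10.0.0.1
consul join 10.0.0.2 10.0.0.3
Once all three servers can see each other, -bootstrap-expect 3 triggers the leader election automatically.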
I'm in a situation where I have two masters and four slaves in Mesos. All of them are running fine. But when I try to access Marathon I get the 'Could not determine the current leader' error. I have Marathon on both masters (117 and 115).
This is basically what I'm running to get marathon up:
java -jar ./bin/../target/marathon-assembly-0.11.0-SNAPSHOT.jar --master 172.16.50.117:5050 --zk zk://172.16.50.115:2181,172.16.50.117:2181/marathon
Could anyone shed some light on this?
First, I would double-check that you're able to talk to Zookeeper from the Marathon hosts.
Next, there are a few related points to be aware of:
Per the Zookeeper administrator's guide (http://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_zkMulitServerSetup) you should have an odd number of Zookeeper instances for HA. A cluster size of two is almost certainly going to turn out badly.
For a highly available Mesos cluster, you should run an odd number of masters and also make sure to set the --quorum flag appropriately based on that number. See the details of how to set the --quorum flag (and why it's important) in the operational guide on the Apache Mesos website here: http://mesos.apache.org/documentation/latest/operational-guide
In a highly-available Mesos cluster (#masters > 1) you should let both the Mesos agents and the frameworks discover the leading master using Zookeeper. This lets them rediscover the leading master in case a failover occurs. In your case, assuming canonical ZK ports, you would set the --zk flag on the Mesos masters to --zk=zk://172.16.50.117:2181,172.16.50.115:2181/mesos (add a third ZK instance, see the first point above). The same value should be used for the --master flags in both the Mesos agents and Marathon, instead of specifying a single master; see the sketch after these points.
It's best to run an odd number of masters in your cluster. To do so, either add another master so you have three or remove one so you have only one.
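Applied to the Marathon command from the question, that change would look something like this (still showing only the two existing ZK instances; a third should be added per the first point above):
java -jar ./bin/../target/marathon-assembly-0.11.0-SNAPSHOT.jar --master zk://172.16.50.115:2181,172.16.50.117:2181/mesos --zk zk://172.16.50.115:2181,172.16.50.117:2181/marathon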
It's a known fact that it is not possible to create a cluster on a single machine just by changing ports. The workaround is to add virtual Ethernet devices to the machine and use these to configure the cluster.
I want to deploy a cluster of, let's say, 6 nodes on two EC2 instances, i.e. 3 nodes on each machine. Is it possible? If so, what should the seed nodes' addresses be?
Is it a good idea for production?
You can use the DataStax AMI on AWS. DataStax Enterprise is a suitable solution for production.
I am not sure about your 3-nodes-per-instance layout, because by default each node needs its own config files, and I have no idea how to change that.
There are simple instructions here. When you configure the instance settings, you have to pass advanced settings for the cluster, like --clustername yourCluster --totalnodes 6 --version community etc. You can also install Cassandra manually by installing the latest versions of Java and Cassandra.
You can build the cluster by modifying /etc/cassandra/cassandra.yaml (Ubuntu 12.04), with fields like cluster_name, seeds, listen_address, rpc_address and initial_token. cluster_name has to be the same for the whole cluster. A seed is a contact node (Cassandra has no master) whose IP you should list on every node. I am still confused about tokens.
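As a minimal sketch of those cassandra.yaml fields, with placeholder IPs (10.0.0.1 as the seed, 10.0.0.2 as the node being configured):
cluster_name: 'yourCluster'
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.1"
listen_address: 10.0.0.2
rpc_address: 10.0.0.2
cluster_name and the seeds list should be identical on every node, while listen_address and rpc_address hold each node's own IP.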