RethinkDB failover cluster with just 2 nodes

I would like to set up an IoT reporting database with failover. The idea is to have a cluster with 2 nodes, one in a datacenter and one at home.
If "home" loses its internet connection, it continues to operate, and once it is back online, "datacenter" would sync up with the changes made while offline.
Now, I read in the RethinkDB docs that you need at least 3 nodes for failover to function.
So the question is: is my scenario doable with just 2 nodes, and if yes, how?

According to the docs at https://www.rethinkdb.com/docs/start-a-server/:
First, start RethinkDB on the first machine:
$ rethinkdb --bind all
Then start RethinkDB on the second machine:
$ rethinkdb --join IP_OF_FIRST_MACHINE:29015 --bind all
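For what it's worth, here is a rough sketch of the same two commands with explicit server names and tags, which makes the two machines easier to tell apart in the admin UI. The names and tag values are my own placeholders, not from the docs:
# Datacenter machine (server name and tag are illustrative placeholders)
$ rethinkdb --bind all --server-name datacenter --server-tag dc
# Home machine, joining the datacenter node
$ rethinkdb --join IP_OF_FIRST_MACHINE:29015 --bind all --server-name home --server-tag home
Keep in mind that with only two nodes a table can keep a replica on each machine, but automatic failover still requires a majority of voting replicas, so a 2-node cluster cannot elect a new primary on its own when one node disappears.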

Related

How to set up 2 nodes in Elasticsearch?

Hello enthusiastic people.
I am a student trying to learn the Elastic Stack.
I have 1 node installed on my local machine. I have also successfully installed Beats on my other local machine to collect data and deliver it to my Logstash.
My question is: if I add another node, do I still need to install Kibana and Elasticsearch on it, and then connect it to my first node?
I have read in a lot of places that a single node is prone to data loss.
Sorry for the noob question.
Your answer is very much appreciated.
Thanks in advance.
A cluster can have one or more nodes, but having at least 3 nodes is good for data security and integrity.
During learning and development it will be easier for you to install with Docker. I recommend you follow the link below, which explains how to set up a 3-node Elasticsearch cluster with Docker:
Start a multi-node cluster with Docker Compose
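If you would rather run the second node directly on your other machine instead of Docker, a rough sketch looks like this (assuming Elasticsearch 7.x or later; the cluster name, node name and IP placeholder below are made up for illustration):
# On the second machine: join the existing node instead of forming a new cluster
# (cluster.name must match the first node; node name and IP are placeholders)
$ bin/elasticsearch \
    -E cluster.name=learning-cluster \
    -E node.name=node-2 \
    -E network.host=0.0.0.0 \
    -E discovery.seed_hosts=IP_OF_FIRST_NODE
You only need Elasticsearch itself on each additional node; a single Kibana instance pointed at the cluster is enough, it does not have to be installed on every node.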

How to configure and install a standby master in Greenplum?

I've installed a single-node Greenplum DB with 2 segment hosts, each hosting 2 primary and mirror segments, and I want to configure a standby master. Can anyone help me with it?
It is pretty simple.
gpinitstandby -s smdw -a
Note: If you are using one of the cloud Marketplaces that deploys Greenplum for you, the standby master runs on the first segment host. The overhead of running the standby master is pretty small, so it doesn't impact performance. The cloud Marketplaces also have self-healing, so if that node fails, it is replaced and all services are automatically restored.
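Putting it together, a minimal sketch (the standby host name smdw comes from the command above; the verification step is my own addition):
# Run on the current master as gpadmin: add smdw as the standby master, non-interactively
$ gpinitstandby -s smdw -a
# Check that the standby master is configured and in sync
$ gpstate -f
If the primary master ever fails, the standby is then promoted with gpactivatestandby on the standby host.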
As Jon said, this is fairly straightforward. Here is a link to the documentation: https://gpdb.docs.pivotal.io/5170/utility_guide/admin_utilities/gpinitstandby.html
If you have follow-up questions, post them here.

How to set up a VerneMQ cluster with 3 nodes?

I am new to MQTT and would like to create a VerneMQ cluster of three nodes. How can I do this (with the mosquitto client), please?
I have tried to do it with a bridge in two distinct VMs on Ubuntu 18, but I haven't had success.
First you need to have 3 running VerneMQ nodes. Then you'll join one node to the other like this:
vmq-admin cluster join discovery-node=<OtherClusterNode>
Then you check the cluster state (you should see a 2 node cluster):
vmq-admin cluster show
Then you repeat the first command and join the 3rd node to the cluster (the discovery-node can be any node in the existing cluster).
Note: your VerneMQ nodes need to be configured correctly, namely with regard to configured listeners and ports. See here:
https://vernemq.com/docs/clustering/communication.html
If you use cloud VMs/Docker or similar, make sure you configure access accordingly.
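To make that concrete, a sketch with made-up node names (VerneMQ@node1.example.com is a placeholder for whatever nodename your first node is configured with):
# Run on node 2 and node 3, pointing at node 1 (node name is a placeholder)
$ vmq-admin cluster join discovery-node=VerneMQ@node1.example.com
# Run on any node to verify that all three members are listed
$ vmq-admin cluster show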
You need to take care of a couple of things if you want to run multiple VerneMQ instances on the same machine. There is a make target that lets you build multiple releases as a convenience. This will prepare 3 correctly configured vernemq.conf files, with different ports for the MQTT listeners etc.
$ make dev1 dev2 dev3
This will prepare different vernemq.conf files in the respective release builds (look at them in the _build directory after building the releases).
You can then start the respective broker instances in 3 terminal windows.
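As a rough sketch of what that can look like (the release paths and node names below are assumptions based on the _build layout mentioned above; check the generated directories and vernemq.conf files for the actual values):
# Start the three dev releases, one per terminal window (paths are assumptions)
$ _build/dev1/rel/vernemq/bin/vernemq start
$ _build/dev2/rel/vernemq/bin/vernemq start
$ _build/dev3/rel/vernemq/bin/vernemq start
# Join dev2 and dev3 to dev1 (dev1's node name is whatever its vernemq.conf defines)
$ _build/dev2/rel/vernemq/bin/vmq-admin cluster join discovery-node=dev1@127.0.0.1
$ _build/dev3/rel/vernemq/bin/vmq-admin cluster join discovery-node=dev1@127.0.0.1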
Hope this helps.
EDIT: can't comment yet, so had to add this as an answer.

Elasticsearch in production with Kubernetes

I am working on a product in which we are using Elasticsearch for search. Our production setup is in K8s (1.7.7) and we are able to scale it pretty well. The only thing I am not sure about is whether we should host Elasticsearch in K8s (it could also go on dedicated hosts using node label selectors), or whether it is advisable to host Elasticsearch on VMs rather than in Docker.
Our data set size is 2-3 GB and will grow further, but this is the benchmark we can consider.
The Elasticsearch cluster I am planning to have is: 3 master (with 2 as eligible master), one client node, and one data node. We can scale data nodes and client nodes as the data grows.
Has anyone done this before? Thanks in advance.
IMO the best resource for Elasticsearch on Kubernetes is https://github.com/pires/kubernetes-elasticsearch-cluster
Note that while there are official Docker containers, no official solution for orchestration is being provided at the moment. This is currently covered by the community only.
3 master (with 2 as eligible master)
This doesn't make much sense. You'll want 3 master-eligible nodes with the setting discovery.zen.minimum_master_nodes: 2, and one of the 3 nodes will be elected as the actual master.
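As a sketch of what that means in practice (these are the pre-7.x zen discovery settings that match the versions discussed here; the values are illustrative, not a definitive setup):
# Settings for each of the 3 master-eligible nodes; quorum = 3/2 + 1 = 2
$ bin/elasticsearch \
    -E node.master=true \
    -E node.data=false \
    -E discovery.zen.minimum_master_nodes=2
In a Kubernetes deployment these would typically be set through the pod's environment or elasticsearch.yml rather than on the command line.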

ejabberd Cluster Not Working

I've set up an ejabberd (v 15.04) cluster on AWS using 2 Ubuntu images. While I am able to successfully cluster the two (using the join_cluster command from the 2nd node to the 1st node), I am not sure if the behavior is as expected... any thoughts would be much appreciated...
To detail the above: 2 different clients connected to the 2 nodes separately can communicate with each other. However, when I stop the server on the secondary node, I would still expect the two clients to be able to talk to each other. But instead, the 2nd client simply gets disconnected because the server it was connected to is no longer running.
Is there something possibly that am overlooking here?
Many thanks!
Join the two nodes with the join_as_master() method.
The cluster code is available on the GitHub site.
For doing the ejabberd clustering I followed the steps from the link below:
Link: http://chadillac.tumblr.com/post/35967173942/easy-ejabberd-clustering-guide-mnesia-mysql
I have done the clustering with no MySQL tables, only the Mnesia database.
Important notes:
1) The ejabberd.yml file should be the same as on the master host.
2) Copy the .erlang.cookie file from the master to the slaves.
3) The slave host name is set in ejabberdctl.cfg and will differ from the one mentioned in the slave's yml file.
4) MySQL, in our case, runs on a totally different machine, so there is no need to add it to the cluster.
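For reference, the built-in clustering flow the question refers to looks roughly like this (node names are placeholders; the list_cluster check is an assumption about which ejabberdctl commands your build provides):
# On the second node, with the same .erlang.cookie as node 1 and ejabberd running
$ ejabberdctl join_cluster 'ejabberd@node1'
# Verify cluster membership (run on either node)
$ ejabberdctl list_cluster
Note that clustering on its own does not keep an already-connected client alive when its node stops; the client has to reconnect, so you would normally put a load balancer or DNS entry in front of the nodes so clients can reach a surviving node.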
