MARIADB GALERA CLUSTER - cluster-computing

I have a working cluster with three nodes on the 192.168.14.x subnet.
I wanted to add an external machine (IP 78.3.157.x) to the cluster.
The external machine fails to join the cluster with "failed to open gcomm backend connection: 110".
Is such a configuration actually possible, and if so, how?

I think you have a network-level problem. The servers are not able to reach each other, probably because there is some sort of non-transparent NAT device between them: one of the IPs is public and the others are private.
Fix your networking layer so that all your nodes have full, non-NATed connectivity to each other. They do not need to be in the same subnet as long as the connectivity between them works in every direction.
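For reference, a minimal sketch of the Galera settings and ports involved once routing is sorted out; the file path and addresses are placeholders for your nodes, and error 110 is the Linux ETIMEDOUT code, i.e. the group-communication port never answered:

    # /etc/mysql/conf.d/galera.cnf (path varies by distribution)
    [galera]
    wsrep_on              = ON
    wsrep_cluster_address = gcomm://192.168.14.1,192.168.14.2,192.168.14.3,78.3.157.x
    # On each node, advertise an address the *other* nodes can actually reach:
    wsrep_node_address    = <ip-reachable-by-the-other-nodes>
    # Galera needs 3306/tcp, 4567/tcp+udp (group communication), 4568/tcp (IST)
    # and 4444/tcp (SST) open in both directions between every pair of nodes.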

Related

Cluster configuration on MonetDB: cannot discover other nodes

I have installed and configured a 3-node MonetDB cluster on 3 virtual machines on my MacBook (using Oracle VirtualBox). I use MonetDB 5 server 11.37.7.
I have followed the Cluster Management documentation of MonetDB, but the monetdb discover command only returns the dbfarm of the local instance. Each node still isn't aware of the other nodes.
I can connect to any node from any other node using monetdb -h [host] -P [passphrase], and I can also discover the remote farms of a specific host by using monetdb -h host -P passphrase discover.
The answer to the question monetdb cluster management can't setup helped me set the listenaddr property to 0.0.0.0, but the discover command still only returns the local monetdb farm.
EDIT
Thanks to Jennie's suggestion below, I noticed that the monetdb log file contains error while sending broadcast message: Network is unreachable.
I used the netcat utility to broadcast a UDP message from one node to the other 2 and it worked. I can ping and ssh, and the 3 nodes are part of the same network configured with VirtualBox, but the error is still there.
All your VMs must be in the same LAN environment. monetdb discover basically goes over all IP addresses under the same subnet.
Can you somehow verify that that's the case?
I got it working, thanks to Jennie's post. For anyone using VirtualBox:
Use the first network adapter of each configured node with Bridged access instead of NAT
Configure the following property of your dbfarm: listenaddr=0.0.0.0
For testing purposes, it may be worth reducing the discoveryttl property to less than the default of 10 minutes, as sketched below
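A sketch of those steps from the command line; the VM name, the host adapter (en0) and the dbfarm path are placeholders for your own setup:

    # Switch each VM's first adapter from NAT to bridged (with the VM powered off):
    VBoxManage modifyvm "monetdb-node1" --nic1 bridged --bridgeadapter1 en0
    # Let monetdbd listen on all interfaces and announce itself more often:
    monetdbd set listenaddr=0.0.0.0 /path/to/dbfarm
    monetdbd set discoveryttl=60 /path/to/dbfarm
    monetdbd get all /path/to/dbfarm   # verify the properties, then restart monetdbd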

How to host kubernetes cluster on VPN comprising of VM's from different cloud providers

What I am trying to accomplish here is to create a k8s cluster where the worker & master nodes are in different clouds.
For example, I have a VM instance in AWS and another one in GCP. I can't use them as master & worker nodes because they are not in the same network range.
My question: is it possible to create a VPN comprising these machines and then host a k8s cluster on top of it, so that I can use machines in different clouds as my worker/master nodes?
Or is there some fundamental flaw in my understanding of k8s?
I don't want to use IPsec or DRG, which are not part of the free tier, to achieve this.
A number of Kubernetes Container Network Interface (CNI) plugins support overlay networks. An overlay network creates tunnels over the real network so the k8s nodes can communicate across physical subnets on what appears to be a local interface.
Flannel does VXLAN (UDP)
Calico does IPIP or VXLAN
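As an illustration, this is roughly what selecting Flannel's VXLAN backend looks like in the net-conf.json section of the kube-flannel ConfigMap; the pod CIDR below is the common default rather than a requirement, and with this backend UDP port 8472 has to be allowed between the nodes' public IPs in both clouds' firewall rules:

    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }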
There are a number of issues you will need to tackle with a geographically dispersed cluster:
How you schedule your application across the cluster appropriately.
How the nodes communicate with the masters.
How your etcd cluster is structured.
The common solution to these problems is to run multiple clusters, one in each geo location, and traffic-manage them.

FQDN on Azure Service Fabric on Premise

I don't see a way to configure the cluster FQDN for an on-premises installation.
I created a 6-node cluster (each node running on a physical server) and I'm only able to contact each node on its own IP instead of contacting the cluster on a "general FQDN". With this model, I have to be aware of which node is up and which node is down.
Does somebody know how to achieve it, based on the sample configurations files provided with Service Fabric standalone installation package?
You need to add a network load balancer to your infrastructure for that. This will be used to route traffic to healthy nodes.
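For example, a minimal sketch of fronting the nodes with nginx as that load balancer; the node IPs and port 8080 are placeholders for wherever your service actually listens:

    # In nginx.conf, at the same level as the http block:
    stream {
        upstream sf_nodes {
            server 10.0.0.1:8080;
            server 10.0.0.2:8080;
            server 10.0.0.3:8080;
            # add the remaining nodes here
        }
        server {
            listen 8080;
            proxy_pass sf_nodes;
        }
    }

You then point a single DNS name, your cluster "FQDN", at the load balancer instead of at individual nodes.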

Hadoop Cluster distributed in different sub-networks (Docker + Flannel)

I want to run Hadoop 2.3.0 in a multi-node bare-metal cluster using Docker. I have a master container and a slave container (in this first setup). When the master and slave containers are on the same host (and therefore the same Flannel subnet), Hadoop works perfectly. However, if the master and slave are on different bare-metal nodes (hence, different Flannel subnets), it simply does not work (I get a connection refused error). Both containers can ping and ssh one another, so there is no connectivity problem. For some reason, it seems that Hadoop needs all the nodes in the cluster to be in the same subnet. Is there a way to circumvent this?
Thanks
I think having the nodes in separate flannel subnets introduces some NAT-related rules which cause such issues.
See the below link which seems to have addressed a similar issue
Re: Networking Problem in creating HDFS cluster.
Hadoop uses a number of other ports for communication between the nodes; the above assumes these ports are unblocked.
ssh and ping are not enough. If you have iptables or any other firewall, you need to either disable it or open up the ports. You can set up the cluster as long as the hosts can communicate with each other and the ports are open. Run telnet <namenode> <port> to ensure the hosts are communicating on the desired ports.
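A quick way to probe that from each container, assuming the default Hadoop 2.x ports (NameNode RPC 8020, DataNode 50010, NameNode web UI 50070, ResourceManager web UI 8088); adjust the hostname and port list to your own configuration:

    # Bash-only TCP probe, no extra tools required inside the container:
    for port in 8020 50010 50070 8088; do
        timeout 3 bash -c "echo > /dev/tcp/master-hostname/$port" \
            && echo "port $port reachable" \
            || echo "port $port blocked or closed"
    done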

Do 2 different Availability Zones in EC2 act as a WAN or a LAN?

I am trying to establish a RabbitMQ Cluster on EC2 over 2 availability zones.
In its docs, RabbitMQ says to avoid network partitions between the cluster nodes.
Do 2 different Availability Zones in EC2 act as a WAN or a LAN?
Can anyone direct me to a link?
Thank you.
A RabbitMQ cluster is not recommended across a WAN (say, 2 regions), but the connection between availability zones can be viewed as a LAN.
We run a RabbitMQ cluster across different AZs with no issues.
AWS doesn't tell you how far apart the AZs are, but you can assume they are close enough to be viewed as a LAN. One of the characteristics of a LAN is that its coverage area is generally a few kilometers.
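Even so, it is worth deciding how the cluster should react if an AZ does get briefly cut off. A minimal sketch, assuming RabbitMQ 3.7+ and its new-style config file; pause_minority only helps if no single AZ holds half or more of the nodes:

    # /etc/rabbitmq/rabbitmq.conf
    cluster_partition_handling = pause_minority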
