I don't see a way to configure the cluster FQDN for an on-premises installation.
I created a 6-node cluster (each node running on a physical server), and I can only contact each node on its own IP instead of contacting the cluster through a general FQDN. With this model, I have to be aware of which node is up and which node is down.
Does anybody know how to achieve this, based on the sample configuration files provided with the Service Fabric standalone installation package?
You need to add a network load balancer to your infrastructure for that. This will be used to route traffic to healthy nodes.
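As a rough sketch of that idea (not from the Service Fabric docs; the node IPs, the FQDN sfcluster.contoso.local, and port 19080 are placeholders), any load balancer with health checks will do. For example, HAProxy set up from a provisioning script, with a DNS record for the cluster name pointing at the HAProxy box:

```
# Hypothetical snippet: front the cluster nodes with HAProxy so clients can use
# http://sfcluster.contoso.local:19080 instead of individual node IPs.
cat > /etc/haproxy/haproxy.cfg <<'EOF'
frontend sf_gateway
    bind *:19080                        # clients hit the cluster FQDN on this port
    default_backend sf_nodes

backend sf_nodes
    balance roundrobin
    option httpchk GET /                # only nodes passing the health check get traffic
    server node1 10.0.0.11:19080 check
    server node2 10.0.0.12:19080 check
    server node3 10.0.0.13:19080 check
    # ... remaining nodes ...
EOF
systemctl restart haproxy
```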
What I am trying to accomplish here is to create a k8s cluster where the worker and master nodes are in different clouds.
For example, I have a VM instance in AWS and another one in GCP. I can't use them as master and worker nodes because they are not in the same network range.
My question: is it possible to create a VPN comprising these machines and then host a k8s cluster on top of it, so that I can use machines in different clouds as my worker/master nodes?
Or is there some fundamental flaw in my understanding of k8s?
I don't want to use IPsec or a DRG to achieve this, since they are not part of the free tier.
A number of Kubernetes Container Network Interface (CNI) plugins support overlay networks. An overlay network creates tunnels over the real network so that k8s nodes in different physical subnets can communicate over what appears to be a local interface (a minimal setup sketch follows the list below).
Flannel does VXLAN (UDP)
Calico does IPIP or VXLAN
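For instance, a minimal sketch of bringing up Flannel's VXLAN overlay on a kubeadm cluster (10.244.0.0/16 is Flannel's default pod CIDR; verify the manifest URL against the current Flannel docs):

```
# Sketch: kubeadm control plane plus the Flannel overlay.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# In both clouds, open the VXLAN port between the two VMs (UDP 8472 is Flannel's default)
# and the Kubernetes API server port (TCP 6443) from the worker to the master.
```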
There are a number of issues you will need to tackle with a geographically dispersed cluster.
How you schedule your application across the cluster appropriately.
How the nodes communicate with masters.
How your etcd cluster is structured.
The common solution to these problems is to run a separate cluster in each geographic location and traffic-manage between them.
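On the scheduling point, one hedged illustration (the label key, node names, and nginx image are just placeholders): tag each node with the cloud it lives in and pin workloads accordingly.

```
# Record where each node lives (label key/values are up to you)
kubectl label node aws-node-1 cloud=aws
kubectl label node gcp-node-1 cloud=gcp

# Pin a workload to one cloud via a nodeSelector
kubectl create deployment web --image=nginx
kubectl patch deployment web -p '{"spec":{"template":{"spec":{"nodeSelector":{"cloud":"aws"}}}}}'
```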
In my microservices system I plan to use Docker Swarm and Consul.
In order to ensure the high availability of Consul, I'm going to build a cluster of 3 server agents (along with a client agent per node), but this doesn't save me from a local Consul agent failure.
Am I missing something?
If not, how can I configure Swarm to be aware of more than one Consul agent?
Consul is the only service discovery backend that doesn't support multiple endpoints when using Swarm.
Both ZooKeeper and etcd support the etcd://10.0.0.4,10.0.0.5 format for providing multiple IPs for the "cluster" of discovery back-ends when using Swarm.
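For example, with the standalone (classic) Swarm binary the multi-endpoint form looks roughly like this (the IPs, ports, and the /swarm path are placeholders):

```
# Manager pointing at two etcd endpoints; either one can serve discovery
swarm manage -H tcp://0.0.0.0:4000 etcd://10.0.0.4:2379,10.0.0.5:2379/swarm

# Each node registers itself against the same discovery URL
swarm join --advertise=192.168.1.10:2375 etcd://10.0.0.4:2379,10.0.0.5:2379/swarm
```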
To answer your question about how you can configure Swarm to support more than one Consul server: I don't have a definitive answer, but I can point you in a direction with something you can test (no guarantees):
One suggestion worth testing (not recommended for production) is to use a load balancer that passes requests from the Swarm manager to one of the three Consul servers.
So when starting the Swarm managers you can point them to consul://ip_of_loadbalancer:port.
This does, however, make the LB a single point of failure (if it goes down, the managers lose discovery).
I have not tested the above and can't say whether it will work or not; it is merely a suggestion.
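An untested sketch of that suggestion (the LB address is a placeholder, and it would have to forward TCP 8500 to all three Consul servers):

```
# Manager discovers nodes through whichever Consul server the LB picks
swarm manage -H tcp://0.0.0.0:4000 consul://consul-lb.internal:8500/swarm

# Nodes register through the same LB address
swarm join --advertise=192.168.1.10:2375 consul://consul-lb.internal:8500/swarm
```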
Newbie w/ etcd/zookeeper type services ...
I'm not quite sure how to handle cluster installation for etcd. Should the service be installed on each client or a group of independent servers? I ask because if I'm on a client, how would I query the cluster? Every tutorial I've read shows a curl command running against localhost.
For an etcd cluster installation, you can install the service on independent servers and form a cluster. The cluster information can be queried by logging onto one of the machines and running curl locally, or remotely by specifying the IP address of one of the cluster member nodes.
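A rough illustration with the etcd v2 HTTP API (the IPs are placeholders; 2379 is etcd's default client port):

```
# From any machine that can reach a member, list the cluster membership
curl -L http://10.0.1.11:2379/v2/members

# Write and read a key against any member, not just localhost
curl -L -X PUT http://10.0.1.11:2379/v2/keys/message -d value="hello"
curl -L http://10.0.1.12:2379/v2/keys/message

# Or use etcdctl with explicit endpoints instead of localhost
etcdctl --endpoints=http://10.0.1.11:2379,http://10.0.1.12:2379 cluster-health
```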
For more information on how to set it up, follow this article
I spun up a Mesosphere cluster on DigitalOcean (development) and it's not allowing external (non-VPN) connections to containers or apps. How can this be solved?
To ensure that the world doesn't have access to your cluster, iptables rules are installed by default. They allow full access inside the cluster and block everything external.
If you're interested in running real applications, I'd recommend the following:
Put HAProxy on a single node.
Set up the haproxy-marathon-bridge script.
On the same box that you installed HAProxy on, set up iptables to allow access to the port that HAProxy is listening on.
By doing this, you'll have a single place to refer to when giving access to applications running on your Mesos cluster. No matter where the app or container is scheduled (with Marathon), you'll always be able to reach it via HAProxy.
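A hedged sketch of those three steps (the bridge script ships with Marathon; check its exact path, the Marathon port, and the service ports against the docs for your version):

```
# 1. HAProxy on one public-facing node
sudo apt-get install -y haproxy

# 2. Regenerate HAProxy's config from Marathon's task list (run periodically, e.g. from cron)
./haproxy-marathon-bridge localhost:8080 > /etc/haproxy/haproxy.cfg && systemctl reload haproxy

# 3. Open only the port(s) HAProxy listens on (here: 80)
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
```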
One aspect of Docker that has always made me wonder is: what's the appropriate way to let the different containers that collectively make up an application's architecture know about each other's IPs and hostnames?
So let's pretend that we have a Docker deployment topology similar to this:
Each orange node represents a Container. All the containers might be running on the same physical machine or each container might have its dedicated EC2 instance. It is also possible that all Author nodes reside on one EC2 instance, and all the Publisher nodes on their own EC2 instance.
But for the sake of this question, we can assume that they are all running on a Developer's local machine, or each one is running on its own dedicated EC2 instance.
How can I let all of the instances get IP addresses and/or hostnames of all the other instances as they come and go?
So when Author 2 dies, Author 1, Author N, as well as Publisher 1,2 and N should be notified of this event. Similarly, when a new Publisher joins the topology, all other instances should be notified of its IP address and/or hostname.
I am looking for a solution for both my local development environment, as well as one that's suitable for a production AWS deployment. Ideally, the same solution should work both locally and on AWS.
etcd is exactly what you want. AWS doesn't support any sort of multicast or broadcast traffic on its native network. etcd, which comes from the CoreOS project, gives you a framework for service discovery. It allows you to register nodes and discover them.
https://github.com/coreos/etcd
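For a flavor of how that looks with the etcd v2 HTTP API (the hostname, key layout, and TTL below are made up for illustration):

```
# Each container registers itself under a well-known prefix with a TTL and
# refreshes the key periodically; if the container dies, the key expires.
curl -L -X PUT http://etcd.internal:2379/v2/keys/services/publisher/publisher-1 \
     -d value="10.0.3.21:8080" -d ttl=60

# Peers watch the prefix to be notified as instances come and go
curl -L "http://etcd.internal:2379/v2/keys/services?recursive=true&wait=true"
```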