How to configure kube-proxy master_url with multiple apiservers - proxy

I'm using a cluster setup with multiple apiservers and a load balancer in front of them for external access, installed on bare metal.
As mentioned in the High Availability Kubernetes Clusters docs, I would like to use internal load balancing via the kubernetes service from within my cluster. This works fine so far, but I'm not sure what the best way is to set up the kube-proxy. It obviously cannot use the service IP, since it is the component that proxies to that IP based on data from the apiserver (master). I could use the IP of any one of the apiservers, but that would lose the high availability. So the only viable option I currently see is to use my external load balancer, but that seems somehow wrong.
Does anybody have ideas or best practices?

This is quite an old question, but as the problem persists... here it goes.
There is a bug in the Kubernetes restclient that does not allow using more than one IP/URL, as it always picks the first IP/URL in the list. This affects kube-proxy and also kubelet, leaving a single point of failure in those tools in a multi-master setup if you don't use a load balancer (as you did). The load balancer is probably not the most elegant solution ever, but currently (I think) it is the easiest one.
Another solution (which I prefer, but may not work for everyone and does not solve all the problems) is to create a DNS entry that round-robins your API servers, but as pointed out in one of the links below, that only solves the load balancing, not the HA.
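For illustration, a minimal kube-proxy kubeconfig sketch along those lines; apiserver.example.internal is a placeholder for the load balancer or round-robin DNS name, and the certificate paths are placeholders for your own PKI files:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca.crt    # placeholder path
    server: https://apiserver.example.internal:6443  # LB or round-robin DNS name
users:
- name: kube-proxy
  user:
    client-certificate: /etc/kubernetes/kube-proxy.crt  # placeholder path
    client-key: /etc/kubernetes/kube-proxy.key          # placeholder path
contexts:
- name: default
  context:
    cluster: local
    user: kube-proxy
current-context: default

kube-proxy (and kubelet) would then be started with --kubeconfig pointing at that file.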
You can see the progress of this story in the following links:
The kube-proxy/kubelet issue: https://github.com/kubernetes/kubernetes/issues/18174
The restclient PR: https://github.com/kubernetes/kubernetes/pull/30588
The "official" solution: https://github.com/kubernetes/kubernetes/issues/18174#issuecomment-199381822

I think the way it is meant to be set up is that you have a kube-proxy on each master node, so each kube-proxy points to its own master on 127.0.0.1 / localhost.
The podmaster determines which apiserver should run, which in turn makes use of the local proxy of that master.
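For illustration only, a rough static-pod sketch of such a per-master kube-proxy; the image is a placeholder and 8080 is the historical insecure localhost port of the apiserver, so adjust to your version and setup:

apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-proxy
    image: registry.example.com/kube-proxy:v1.x   # placeholder image
    command:
    - /usr/local/bin/kube-proxy
    - --master=http://127.0.0.1:8080   # talk only to the local apiserver
    securityContext:
      privileged: true   # kube-proxy needs to manage iptables on the host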

Related

Openshift master slave

I have Openshift project with 3 pods: FE, BE1, BE2.
FE communicates with BE1 via REST API, BE1 with BE2 via REST API too.
I need to implement replication of the pods. My idea is to make a copy of the pods, so that if one pod in a set stops working, traffic is redirected to the other set.
It will be like this:
Set_1 : FEr1 -> BE1r1 -> BE2r1,
Set_2 : FEr2 -> BE1r2 -> BE2r2
FE is a React app in a container.
BE1 and BE2 are Java apps in separate containers.
I don't know how to configure this. Every container contains pipeline configuration and application.template files.
Does somebody know how this can be done, or maybe another way to achieve it?
Thanks!
If I'm understanding you correctly, your question essentially boils down to "How do I run an active-passive K8S Service?" Because if I could give you an answer on how to run an "active-passive service" for FEr1 / FEr2, then you could use the same technique for each pod in your "sets". So, to simplify my answer, I'm going to focus on how to have a single "active-passive" service. You can then extrapolate on your own how to create a chain of "active-passive" services.
I will begin with the fact that there is no native "active-passive" service object in Kubernetes or OpenShift. It's kind of antithetical to most K8S design patterns. So you are either going to have to change your architecture or build something fairly customized.
When trying to find a link I could share to demonstrate some of your options, I found this blog post from Paul Dally which details most of the options I was going to outline. It is a great exploration of active-passive services in Kubernetes. For convenience, I'm going to summarize here and add some commentary. But he goes into some great detail and I'd recommend reading the original blog post from Paul.
His option #1, and his recommended approach, is essentially "don't do that". He talks about the disadvantages of an active-passive approach and why K8S patterns generally don't take an active-passive approach. I concur: your best option is just to rearchitect your services so that they are not active-passive.
His option #2 is essentially another recommendation of "don't do that". I will paraphrase his second option as "if you are in a situation where you are forced to only have one active pod, the more Kubernetes-native approach would be to only run one pod". In this option you use only a single pod, but use Kubernetes-native Deployments/StatefulSets and liveness probes to keep the single pod available. Obviously, if your pod has slow startup, this has some challenges.
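As a rough sketch of option #2 (the names, image and probe path below are placeholders, not taken from the blog post), a single-replica Deployment kept available by a liveness probe could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: be1
spec:
  replicas: 1               # only one pod, recreated by Kubernetes on failure
  selector:
    matchLabels:
      app: be1
  template:
    metadata:
      labels:
        app: be1
    spec:
      containers:
      - name: be1
        image: registry.example.com/be1:latest   # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health    # assumed health endpoint
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10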
His option #3 is basically his option of last resort. To quote his article, "Make sure that you have fully considered and thoughtfully ruled out the preceding options before continuing with an active/passive load balancing approach." But then he details an approach where you could use a normal K8S Deployment/StatefulSet to create your pods and a normal K8S Service to route traffic between them. But, so that they don't have active-active traffic balancing, you add an additional selector to the service, e.g. "role=active". Since none of the pods will have this label, the selector will prevent either of the pods from being routed to.
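A minimal sketch of such a Service, assuming hypothetical labels app=be1 and role=active (nothing here is taken verbatim from the blog post):

apiVersion: v1
kind: Service
metadata:
  name: be1
spec:
  selector:
    app: be1
    role: active     # no pod carries this label until the failover manager adds it
  ports:
  - port: 8080
    targetPort: 8080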
But this leads to the trick: you create an additional Deployment (and Pod) whose sole job is to maintain that "role=active" label. It's perfectly possible to patch the labels of a running pod. So he provides some pseudo-code for a script that you could run in that "failover manager" pod. Essentially the "failover manager" is just checking for availability, by whatever rules you define, and then controls the failover from the active to passive pod by deleting and adding the label.
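To make that concrete, here is one possible sketch of such a failover manager; the pod names be1-primary and be1-standby, the kubectl image, and the pre-created failover-manager ServiceAccount (which would need RBAC permission to get and label pods) are all assumptions for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: failover-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: failover-manager
  template:
    metadata:
      labels:
        app: failover-manager
    spec:
      serviceAccountName: failover-manager   # assumed to have get/label rights on pods
      containers:
      - name: manager
        image: bitnami/kubectl:latest        # any image containing kubectl
        command: ["/bin/sh", "-c"]
        args:
        - |
          while true; do
            # crude health rule: keep the primary active while it reports Ready
            if kubectl get pod be1-primary -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; then
              kubectl label pod be1-standby role- 2>/dev/null || true
              kubectl label pod be1-primary role=active --overwrite
            else
              kubectl label pod be1-primary role- 2>/dev/null || true
              kubectl label pod be1-standby role=active --overwrite
            fi
            sleep 10
          done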
He does talk about the challenges of this, including making sure it's hardened enough and has the proper permissions. I'd suggest that if you take this approach, you make it a full-fledged operator. Because essentially that's what this kind of approach is: writing a custom operator.
I will also, however, mention another similar approach that I'll call option #4. Essentially what you are doing with option #3 is creating custom routing logic by patching the service. You could just embrace that custom routing approach and deploy something like your own HAProxy. I don't have a sample config for you, but active-passive failover is a fairly well-explored area for HAProxy. You are adding an additional layer of routing, but you are using more off-the-shelf functionality rather than patching services on-the-fly.

ElasticSearch replication home/server

I am running a local ElasticSearch server from my own home, but would like to access its content from outside. Since I am on a dynamic IP and, besides that, do not feel comfortable opening up ports to the outside, I would like to rent a VPS somewhere, set up ElasticSearch, and let that server be a read-only copy of the one I have at home.
As I understand it, this should be possible; however, I have been unsuccessful at creating any usable setup that lets another server act as a read-only copy of my home ES server.
Can anyone point me to a piece of information or a guide that would help me set this up? I am fairly familiar with ES usage, but my setup skills are still limited.
As I understand it, this should be possible
It might be possible with some workarounds, but it's definitely not built for that:
One cluster needs to be in one physical region, mainly because of latency and the stability of the network connection.
There are no read-only versions. You could only allow read access to a node (via a reverse proxy or the security plugin), but that's only a workaround.

Shared IP in CoreOS

I am looking into using CoreOS at work and for a couple of projects where I want no single point of failure. CoreOS and Docker look promising, and I can have Hipache running, talking to an ambassador container that talks to the service. Basically, it can work.
But what about the shared public IP? How is that problem supposed to be solved? I can't find any good documentation on this. http://www.keepalived.org/ looks like something that would solve this problem. But is it the right tool in this situation?
Am I missing something here? Why aren't people talking more about this problem?
There are a few different methods of taking care of this. If you're using a cloud provider (EC2 / OpenStack / Google Compute Engine) there is the concept of a floating IP which can be moved via an API call. This gets rid of having to use things like VRRP directly.
In the long run this is best handled by utilizing DNS entries with a short TTL. Using that method also allows you the greater flexibility of having location aware applications (where DNS in different regions can route to the closest location), easy transition to IPv6, and failover across physical locations without needing to maintain your own internal routing infrastructure.
If you are using keepalived, just add a startup service that binds the floating IP in the cloud-config of every node of the CoreOS cluster, for example:

coreos:
  units:
    - name: local-paas-ip.service
      command: start
      content: |
        [Unit]
        Description=Receive traffic from keepalived floating ip
        [Service]
        # ip addr add exits immediately, so treat the unit as a oneshot
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/ip addr add XXX.XXX.XXX.XXX dev lo label lo:1
I have the same questions/doubts about whether this is the right option, but I need something working now.

Failover proxy on Amazon aws?

This is a fairly generic question. Suppose I have three EC2 boxes: two app boxes and a box that hosts nginx as a reverse proxy, delegating requests to the two app boxes (my database is hosted elsewhere). Now, the two app machines can absorb a failure amongst themselves; however, the third one represents a single point of failure. How can I configure my setup so that if the reverse proxy goes down, the site is still available?
I am looking at keepalived and HAProxy. For me this stuff is non-obvious, and any help pitched at a beginner is appreciated.
If your nginx does not do much more than proxy HTTP requests, please have a look at Amazon Elastic Load Balancer. You can set up your two (or more) app boxes, leave some spare ones (in order to always keep two or more up, if you need it), set up health checks, have SSL termination at the balancer, make use of sticky sessions, etc.
There are a lot of people, though, who would like to see the ability to assign Elastic IP addresses to ELBs, and others with good arguments for why it is not needed.
My suggestion is that you take a look at the ELB documentation, as it seems to fit your needs perfectly. I also recommend reading this interesting post for a good discussion on the subject.
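If you go the ELB route, a rough CloudFormation sketch of a classic load balancer with a health check could look like this; the availability zones, instance IDs and health-check path are placeholders:

Resources:
  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      AvailabilityZones:
        - us-east-1a            # placeholder zones
        - us-east-1b
      Instances:
        - i-0123456789abcdef0   # placeholder app instance IDs
        - i-0fedcba9876543210
      Listeners:
        - LoadBalancerPort: '80'
          InstancePort: '80'
          Protocol: HTTP
      HealthCheck:
        Target: HTTP:80/health  # assumed health endpoint on the app boxes
        Interval: '30'
        Timeout: '5'
        HealthyThreshold: '3'
        UnhealthyThreshold: '2'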
I think if you are a beginner with HA and clusters, your best solution is Elastic Load Balancer (ELB), which is maintained by Amazon. It scales up automatically and implements a highly available cluster of balancers, so by using the ELB service you already mitigate the point of failure you mentioned. It's also worth keeping in mind that an ELB is cheaper than two instances in AWS, and of course easier to launch and maintain.
You don't see multiple ELBs because it is a managed service, so you don't have to take care of its availability yourself.
Another important point is that AWS Elastic IPs aren't assigned to the NIC interface of your instance's OS, so using virtual IPs the way you would in a classical infrastructure is difficult.
After this explanation, if you still want nginx as a reverse proxy in AWS for your own reasons, I think you can implement an autoscaling group with a layer composed of nginx instances. But if you aren't an expert in autoscaling technology, it can be very tricky.

Managing server instance identity on EC2

I recently brought up a cluster on EC2, and I felt like I had to invent a lot of things. I'm wondering what kinds of tools, patterns, ideas are out there for how to deal with this.
Some context:
I had 3 different kinds of servers, so first I created AMIs for each of them. The first AMI had zookeeper, so step one in deploying the system was to get the zookeeper server running.
My script then made a note of the mapping between EC2's completely arbitrary and unpredictable hostnames and the zookeeper server.
Then as I brought up new instances of the other 2 kinds of servers, the first thing I would do is ssh to the new server, and add the zookeeper server to its /etc/hosts file. Then as the server software on each instance starts up, it can find zookeeper.
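For reference, the equivalent of that step expressed as cloud-init user-data, rather than a post-boot ssh, would look roughly like this; the address and hostname are placeholders:

#cloud-config
bootcmd:
  # idempotently add the zookeeper server to /etc/hosts on every boot
  - sh -c 'grep -q "zookeeper-1" /etc/hosts || echo "10.0.0.12 zookeeper-1" >> /etc/hosts'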
Obviously this is a problem that lots of people have to solve, and it probably works a little bit differently in different clouds.
Are there products that address this concept? I was pretty surprised that EC2 didn't provide some kind of way to tie your own name to its name.
Thanks for any ideas.
"How to do some service discovery on Amazon EC2" seems to have some good options.
I think you might want to look at http://puppetlabs.com/mcollective/introduction/ and the suite of tools from http://puppetlabs.com in general.
From the site:
The Marionette Collective AKA MCollective is a framework to build server orchestration or parallel job execution systems.
Primarily we’ll use it as a means of programmatic execution of Systems Administration actions on clusters of servers. In this regard we operate in the same space as tools like Func, Fabric or Capistrano.
I am fairly certain MCollective was built to solve exactly the problem you are trying to address. But, be forewarned, it's not a DNS-based solution; it's a method of addressing arbitrarily large and arbitrarily tagged groups of hosts.
