I have a 2-node Kubernetes cluster (1 Linux node + 1 Windows node).
The Windows node runs version 1803. The issue I'm facing is that Pods on the Windows node can't resolve the DNS names of Services created in the cluster that point to Pods on the Linux machine.
I can run containers on the Windows node (I tried IIS) and they start successfully, but they cannot resolve DNS: nslookup fails, and so does pinging the ClusterIP of a Service.
On the Linux node everything works fine and Pods can see each other. Any suggestions as to where the problem might be?
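For reference, a minimal way to reproduce the checks described above (the pod and Service names are placeholders, not actual resources):

# from the master, pick a Windows pod and try DNS resolution inside it
kubectl exec -it <windows-pod> -- nslookup kubernetes.default.svc.cluster.local
kubectl exec -it <windows-pod> -- nslookup <my-service>

# look up the ClusterIP of the Service and try to reach it directly
kubectl get svc <my-service> -o wide
kubectl exec -it <windows-pod> -- ping <cluster-ip>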
I have several Windows servers available and would like to set up a Kubernetes cluster on them.
Is there some tool or a step by step instruction how to do so?
What I tried so far is to install DockerDesktop and enable its Kubernetes feature.
That gives me a single-node cluster. However, adding additional nodes to that Docker-Kubernetes cluster (from different Windows hosts) does not seem to be possible:
Docker desktop kubernetes add node
Should I first create a Docker Swarm and could then run Kubernetes on that Swarm? Or are there other strategies?
I guess that I need to open some ports in the Windows Firewall settings of the hosts? And map those ports to some Docker containers in which Kubernetes will be installed? What ports?
Is there some program that I could install on each Windows host and that would help me with setting up a network with multiple hosts and connecting the Kubernetes nodes running inside Docker containers? Like a "kubeadm for Windows"?
It would be great if you could give me a hint in the right direction.
Edit:
Related info about installing kubeadm inside Docker container:
https://github.com/kubernetes/kubernetes/issues/35712
https://github.com/kubernetes/kubeadm/issues/17
Related question about Minikube:
Adding nodes to a Windows Minikube Kubernetes Installation - How?
Info on kind (Kubernetes in Docker) multi-node clusters:
https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/
(Creates a multi-node Kubernetes cluster on a single Windows host)
Also see:
https://github.com/kubernetes-sigs/kind/issues/2652
https://hub.docker.com/r/kindest/node
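For what it's worth, the kind approach linked above boils down to something like the following sketch (cluster name and node roles are just an example, following kind's multi-node config pattern):

cat <<EOF | kind create cluster --name=winkind --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF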
You can always refer to the official Kubernetes documentation, which is the right source for this information and the correct way to approach this question.
Based on Adding Windows nodes, you need to have two prerequisites:
Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking, you must also have KB4489899 installed.
A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see Creating a single control-plane cluster with kubeadm).
The second point is especially important, since all control plane components are supposed to run on Linux systems (I guess you could run a Linux VM on one of the servers to host the control plane components, but networking would be much more complicated).
And once you have a properly running control plane, there is kubeadm for Windows to join Windows nodes to the Kubernetes cluster, as well as documentation on how to upgrade Windows nodes.
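As a rough sketch of the join step (the IP, token, and hash below are placeholders; the actual values come from the command printed on the control plane):

# on the Linux control plane: print a ready-made join command
kubeadm token create --print-join-command

# on the Windows node (after installing a container runtime, kubelet and kubeadm for Windows):
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>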
For the firewall and the ports that should be open, check ports and protocols.
For worker nodes (which will be the Windows nodes):
Protocol  Direction  Port Range   Purpose            Used By
TCP       Inbound    10250        Kubelet API        Self, Control plane
TCP       Inbound    30000-32767  NodePort Services  All
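On the Windows workers, opening those ports could look roughly like this (PowerShell; the rule display names are arbitrary):

New-NetFirewallRule -DisplayName "Kubelet API" -Direction Inbound -Protocol TCP -LocalPort 10250 -Action Allow
New-NetFirewallRule -DisplayName "NodePort Services" -Direction Inbound -Protocol TCP -LocalPort 30000-32767 -Action Allow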
Another option is to run Windows nodes in cloud-managed Kubernetes, for example GKE with a Windows node pool (yes, I understand that it's not your use case, but for future reference).
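As an illustration only (the cluster name, zone, and machine type are made up; check the GKE docs for the currently supported Windows image types):

gcloud container node-pools create windows-pool \
  --cluster=my-cluster --zone=us-central1-a \
  --image-type=WINDOWS_LTSC_CONTAINERD \
  --machine-type=n1-standard-4 --num-nodes=1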
I tried to find the answer in previous posts, but I did not find it!
My question may seem dumb, I'm just trying to figure it out :)
I'm new to Docker and Kubernetes, and I'm trying to understand the architecture of a Kubernetes cluster, nodes, and pods.
I'm using two machines with Docker installed, and each machine has two containers running. I want to install MicroK8s to start playing with Kubernetes. My questions are:
As in the image below: can I install it on a separate machine and connect it to my Docker host machines so that it manages my containers there (through some sort of agent or service)? Or must Kubernetes/MicroK8s be installed on the machines that will host the containers?
Can I add my running Docker containers directly to a pod, or must I re-create them?
Many thanks
You can play with any VM software (CPU virtualization required).
You can set up 3 VMs (master, node1, node2). You have to install Kubernetes in each VM. Once you connect them through Calico, they can communicate with each other. When you create pods with an app or a database, you can load-balance them across node1 and node2 (or more) from the master. Then you can create a Service to expose a route to the pods. Or, if you want to run everything on one big server, you can. Horizontal or vertical scaling is your choice.
You can't move a running Docker container into a pod, but you can load a Docker image from any registry.
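A minimal sketch of what that looks like in practice (the pod-network CIDR, Calico manifest location, image, and ports are examples; use the values from the kubeadm and Calico documentation):

# on the master VM: bootstrap the control plane and install Calico
kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f <calico-manifest.yaml>         # manifest URL from the Calico docs

# on node1 and node2: join with the command printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# then run an app from a registry image and expose it as a Service
kubectl create deployment web --image=nginx:1.25 --replicas=2
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods -o wide                        # pods are spread across node1/node2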
I have two physical machines, both running in the same network, and I made one of them a manager and the other one a worker. The nodes join correctly and I was able to view them by running docker node ls.
In the stack YML file, I have 4 applications in total, two of which run on the manager node and the others on the worker node.
My issue is that the applications in the manager node cannot reach the applications in the worker node via the overlay network.
More information:
The manager node is running Ubuntu 18.04 LTS, and the worker node is a Mac mini (macOS 10.14.1). The architecture looks like this:
I suspect this is a Mac issue. Any ideas?
I have been trying to work around similar issues. The root cause is that Docker Desktop for macOS is not a "true" Docker engine and does not forward network requests to/from other hosts properly. Details are here: https://docs.docker.com/docker-for-mac/docker-toolbox/
The workaround is to use virtual machines on macOS (e.g., VirtualBox) via the docker-machine command line. Details are in How to connect to a docker container from outside the host (same network) [OSX 10.11]
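Roughly, the docker-machine route looks like this (the machine name, join token, and manager IP are placeholders):

# create a VirtualBox-backed Docker host on the Mac
docker-machine create --driver virtualbox swarm-worker
# point the local docker CLI at that VM and join the existing swarm
eval $(docker-machine env swarm-worker)
docker swarm join --token <worker-token> <manager-ip>:2377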
I tried the VirtualBox path, adding a third network adapter in bridged mode, and I can finally ping all 3 nodes from the container.
I have a simple one-master (Ubuntu 16.04), one-worker (Windows Server 1803) Kubernetes cluster running in AWS. I am using Flannel for networking.
I have been able to deploy Windows containers using kubectl from the master without issue. Deploying multiple pods shows they are able to talk to each other. But I am not able to ping or curl the pods even from the Kubernetes Windows node host, let alone from the open internet. Also, the pods are not able to communicate with the outside internet either (can't curl external DNS names or even IP addresses).
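For context, these are the kinds of checks that fail (the pod name and IP are placeholders, and curl is assumed to be available in the container image):

# from the master: find the pod's IP
kubectl get pods -o wide

# from the Windows node host: neither of these gets a response
curl http://<pod-ip>:80
ping <pod-ip>

# from inside a Windows pod: outbound traffic also fails
kubectl exec <windows-pod> -- curl -I https://www.example.com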
Side note: the same image deployed directly with Docker on the Windows node is able to connect to the internet and be accessed over the internet.
I used the following setup from Microsoft, which uses kubeadm, flannel and scripts from Microsoft SDN repo.
https://onedrive.live.com/view.aspx?resid=E2B6765015E5FA01!339&ithint=file%2cdocx&app=Word&authkey=!AGvs_s_hWs7xHGs
It is my understanding that on Windows the host network interface is not connected to the Kubernetes network interface by default, whereas the Docker network uses the default interface. That might be why Docker deployments can be accessed but Kubernetes deployments cannot.
However, I haven't found info on connecting these networks when using Flannel for pod communication on Windows.
I can add any logs or config info that anyone thinks is useful.
Any thoughts? Thanks for your help!
More Details:
I am looking into this: https://unofficial-kubernetes.readthedocs.io/en/latest/getting-started-guides/windows/ which describes connecting the network interfaces between the Windows default network and Kubernetes, but it does not seem to rely on the same Flannel host-gw model I used to set this up.
After restarting my 3 masters in my DC/OS cluster, the DC/OS dashboard is showing 0 connected nodes. However from the DC/OS cli I see all 6 of my agent nodes:
$ dcos node
HOSTNAME     IP           ID
172.16.1.20  172.16.1.20  a7af5134-baa2-45f3-892e-5e578cc00b4d-S7
172.16.1.21  172.16.1.21  a7af5134-baa2-45f3-892e-5e578cc00b4d-S12
172.16.1.22  172.16.1.22  a7af5134-baa2-45f3-892e-5e578cc00b4d-S8
172.16.1.23  172.16.1.23  a7af5134-baa2-45f3-892e-5e578cc00b4d-S6
172.16.1.24  172.16.1.24  a7af5134-baa2-45f3-892e-5e578cc00b4d-S11
172.16.1.25  172.16.1.25  a7af5134-baa2-45f3-892e-5e578cc00b4d-S10
I am still able to schedule tasks in Marathon, both from the dcos CLI and from the Marathon GUI; they are then properly scheduled and executed on the agents. Also, from the Mesos interface on :5050 I can see all of the agents on the Slaves page.
I have restarted the agent nodes and the master nodes. I have also rerun the DC/OS GUI installer and run the preflight check, which of course fails with an "already installed" error.
Is there a way to re-register the node with DC/OS GUI short of uninstalling/reinstalling a node?
For anyone who runs into this: my problem was related to our corporate proxy. In order to get the Universe working in my cluster I had to add proxy settings to /opt/mesosphere/environment. I then restarted dcos-cosmos.service and life was good. However, upon server restart, dcos-history-service.service was now running with the new environment and was unable to resolve my local names through our proxy server. To solve this, I added a NO_PROXY entry to /opt/mesosphere/environment and the DC/OS dashboard is happy again.
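A minimal sketch of that change (the proxy host, port, and domain suffix are placeholders for our corporate values):

# appended to /opt/mesosphere/environment
HTTP_PROXY=http://proxy.example.corp:3128
HTTPS_PROXY=http://proxy.example.corp:3128
NO_PROXY=localhost,127.0.0.1,.example.corp

# then restart the affected units so they pick up the new environment
sudo systemctl restart dcos-cosmos.service
sudo systemctl restart dcos-history-service.service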