I have several Windows servers available and would like to set up a Kubernetes cluster on them.
Is there a tool, or step-by-step instructions, for doing so?
What I have tried so far is to install Docker Desktop and enable its Kubernetes feature.
That gives me a single-node cluster. However, adding additional nodes to that Docker-Kubernetes cluster (from different Windows hosts) does not seem to be possible:
Docker desktop kubernetes add node
Should I first create a Docker Swarm and then run Kubernetes on that Swarm? Or are there other strategies?
I guess that I need to open some ports in the Windows Firewall settings of the hosts, and map those ports to the Docker containers in which Kubernetes will be installed? Which ports?
Is there a program that I could install on each Windows host that would help me set up a network across multiple hosts and connect the Kubernetes nodes running inside Docker containers? Something like a "kubeadm for Windows"?
It would be great if you could point me in the right direction.
Edit:
Related info about installing kubeadm inside Docker container:
https://github.com/kubernetes/kubernetes/issues/35712
https://github.com/kubernetes/kubeadm/issues/17
Related question about Minikube:
Adding nodes to a Windows Minikube Kubernetes Installation - How?
Info on kind (kubernetes in docker) multi-node cluster:
https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/
(Creates a multi-node Kubernetes cluster on a single Windows host.)
Also see:
https://github.com/kubernetes-sigs/kind/issues/2652
https://hub.docker.com/r/kindest/node
You can always refer to the official Kubernetes documentation, which is the right source for this information and the correct way to approach this question.
Based on Adding Windows nodes, you need two prerequisites:
Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking you must also have KB4489899 installed.
A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see Creating a single control-plane cluster with kubeadm).
The second point is especially important, since all control-plane components are supposed to run on Linux systems. (I guess you could run a Linux VM on one of the servers to host the control-plane components, but the networking will be much more complicated.)
And once you have a properly running control plane, there is kubeadm for Windows to properly join Windows nodes to the Kubernetes cluster, as well as documentation on how to upgrade Windows nodes. A rough sketch of the join step is shown below.
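(The PrepareNode.ps1 path below follows the kubernetes-sigs/sig-windows-tools repository and may differ for your release; the version flag is just an example, and the endpoint, token, and hash are placeholders that kubeadm prints on the control plane.)

    # On the Windows node, in an elevated PowerShell session:
    curl.exe -LO https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/kubeadm/scripts/PrepareNode.ps1
    .\PrepareNode.ps1 -KubernetesVersion v1.25.0
    # Join with the values printed by 'kubeadm init' or 'kubeadm token create --print-join-command':
    kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>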
For the firewall and which ports should be open, check Ports and Protocols.
For worker nodes (which the Windows nodes will be):
Protocol  Direction  Port Range   Purpose            Used By
TCP       Inbound    10250        Kubelet API        Self, Control plane
TCP       Inbound    30000-32767  NodePort Services  All
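As a minimal sketch, those two rules could be opened in Windows Firewall like this (run in an elevated PowerShell session; the rule names are arbitrary):

    New-NetFirewallRule -DisplayName "Kubelet API" -Direction Inbound -Protocol TCP -LocalPort 10250 -Action Allow
    New-NetFirewallRule -DisplayName "NodePort Services" -Direction Inbound -Protocol TCP -LocalPort "30000-32767" -Action Allow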
Another option can be running Windows nodes in cloud-managed Kubernetes, for example GKE with a Windows node pool (yes, I understand that it's not your use case, but it may be useful for further reference).
Related
Should I run Consul slaves alongside Nomad slaves, or inside them?
The latter might not make sense at all, but I'm asking just in case.
I brought my own Nomad cluster up with Consul slaves running alongside the Nomad slaves (inside the worker nodes); my deployable artifacts are Docker containers (Java Spring applications).
The issue with my current setup is that my applications can't access the Consul slaves to read configuration (none of 0.0.0.0, localhost, or the worker node IP worked).
Let's say my service exposes 8080. I configured the Docker part (in the HCL file) to use bridge as the network mode, and Nomad maps 8080 to 43210.
Everything is fine until my service tries to reach the Consul slave to read configuration. Ideally, giving the Nomad worker node IP as the Consul host to Spring should suffice, but for some reason it doesn't. A sketch of the network stanza I mean follows.
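For context, a hedged sketch of that kind of stanza (the group and port names here are hypothetical, not from my actual job file):

    group "app" {
      network {
        mode = "bridge"
        port "http" {
          to = 8080  # container listens on 8080; Nomad assigns a dynamic host port such as 43210
        }
      }
    }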
I'm using the latest version of Nomad.
I configured my Nomad slaves like this: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/client1.hcl
And the link below shows how I configured/ran my Consul slave:
https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Note: if I use static port mapping and host as the network mode for Docker (in Nomad) I'll be fine, but then I can't deploy more than one instance of each application on each worker node (due to port conflicts).
Nomad jobs listen on a specific host/port pair.
You might want to ssh into the server and run docker ps to see what host/port pair the job is listening on:

    a93c5cb46a3e  image-name  bash  2 hours ago  Up 2 hours  10.0.47.2:21435->8000/tcp, 10.0.47.2:21435->8000/udp  foo-bar
Additionally, you will need to ensure that the Consul Nomad job is listening on 0.0.0.0, or on the specific IP of the machine. I believe that is this config value: https://www.consul.io/docs/agent/options.html#_bind
All of those will need to match up in order for Consul to be reachable.
More generally, I might recommend: if you're going to run consul with nomad, you might want to switch to host networking, so that you don't have to deal with the specifics of the networking within a container. Additionally, you could schedule consul as a system job so that it is automatically present on every host.
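As a minimal, hedged sketch of that suggestion (the job name, image tag, datacenter, and join address below are assumptions, not taken from the question):

    job "consul-client" {
      datacenters = ["dc1"]
      type        = "system"  # one allocation on every client node

      group "consul" {
        task "agent" {
          driver = "docker"
          config {
            image        = "consul:1.9"
            network_mode = "host"  # host networking, per the recommendation above
            args         = ["agent", "-bind=0.0.0.0", "-retry-join=<consul-server-ip>"]
          }
        }
      }
    }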
So I managed to solve the issue like this:
nomad.job.group.network.mode = host
nomad.job.group.network.port: port "http" {}
nomad.job.group.task.driver = docker
nomad.job.group.task.config.network_mode = host
nomad.job.group.task.config.ports = ["http"]
nomad.job.group.task.service.connect: connect { native = true }
nomad.job.group.task.env: SERVER_PORT= "${NOMAD_PORT_http}"
nomad.job.group.task.env: SPRING_CLOUD_CONSUL_HOST = "localhost"
nomad.job.group.task.env: SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
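Assembled into an actual job file, that configuration would look roughly like this sketch (the job, group, task, and service names are hypothetical):

    job "my-service" {
      group "app" {
        network {
          mode = "host"
          port "http" {}  # dynamic port on the host
        }
        task "app" {
          driver = "docker"
          config {
            network_mode = "host"
            ports        = ["http"]
          }
          service {
            name = "my-service"
            port = "http"
            connect {
              native = true
            }
          }
          env {
            SERVER_PORT                                             = "${NOMAD_PORT_http}"
            SPRING_CLOUD_CONSUL_HOST                                = "localhost"
            SPRING_CLOUD_SERVICE_REGISTRY_AUTO_REGISTRATION_ENABLED = "false"
          }
        }
      }
    }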
Then I ran the Consul agent (slave) using docker-compose alongside the Nomad agent (slave), with host as the network mode and all required ports exposed.
Example of nomad job: https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/nomad/location-update-publisher.hcl
Example of consul agent config (docker-compose file): https://github.com/bmd007/statefull-geofencing-faas/blob/master/infrastructure/server2.yml
Disclaimer: the LAB is part of a cluster visualization framework called LiteArch Trafik, which I created as an interesting exercise to understand Nomad and Consul.
It took me a long time to shift my mind from K8s to Nomad and Consul; integrating them was one of the efforts I spent the last year on.
When service resolution doesn't work, I have found it is more or less always the DNS configuration on the servers.
There is a section about this in the HashiCorp documentation called DNS Forwarding:
Hashicorp DNS Forwarding
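As a minimal sketch of what that documentation describes: with dnsmasq, *.consul queries are forwarded to the local Consul agent's DNS port (8600 by default) via a drop-in file, e.g.:

    # /etc/dnsmasq.d/10-consul
    server=/consul/127.0.0.1#8600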
I have created a LAB which explains how to set up Nomad and Consul.
You can also use the LAB separately.
I created the LAB after learning the hard way how to install the cluster and how to integrate Nomad and Consul.
For the LAB you need Ubuntu Multipass installed.
You execute one script and you get a fully functional local cluster with three servers and three nodes.
It also shows you how to install Docker and how to integrate the services with Consul and the DNS services on Ubuntu.
After running the LAB you will get the links to Nomad, Fabio and Consul.
Hopefully it will guide you through the learning process of Nomad and Consul.
LAB: LAB
Trafik: Trafik Visualizer
I am migrating a standard all-Linux Nomad/Consul cluster in which the Nomad/Consul servers use almost no resources with our workloads. Spinning up dedicated Linux VMs just for them in our new environment seems a bit wasteful, since the environment I am moving to has multiple Windows VMs with spare capacity, which I could use for the Nomad server and Consul server processes to give me the necessary redundancy.
So my question boils down to: if I have the Consul server and Nomad server processes exclusively on Windows, and the Nomad agent and Consul agent processes exclusively on Linux, will they all just get along? The Nomad jobs are all dockerized except for a native system Prometheus exporter.
Both Consul and Nomad are operating-system agnostic. You can use a mix of OSes within your cluster without issue. The main requirements are that you have direct IP connectivity between the agents (i.e., no NAT), low latency (sub-10 ms), and the required ports opened for Consul and/or Nomad agent communication.
See https://www.consul.io/docs/install/ports and https://www.nomadproject.io/docs/install/production/requirements#ports-used for more detail.
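As a hedged sketch of what that means on the Windows servers (the defaults below are from the linked docs: Consul uses 8300-8302, 8500, and 8600; Nomad uses 4646-4648; verify against your versions, and run in an elevated PowerShell session):

    New-NetFirewallRule -DisplayName "Consul TCP" -Direction Inbound -Protocol TCP -LocalPort 8300,8301,8302,8500,8600 -Action Allow
    New-NetFirewallRule -DisplayName "Consul UDP" -Direction Inbound -Protocol UDP -LocalPort 8301,8302,8600 -Action Allow
    New-NetFirewallRule -DisplayName "Nomad TCP" -Direction Inbound -Protocol TCP -LocalPort 4646,4647,4648 -Action Allow
    New-NetFirewallRule -DisplayName "Nomad UDP" -Direction Inbound -Protocol UDP -LocalPort 4648 -Action Allow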
I have a hybrid GKE cluster running with some Linux and Windows nodes. I followed this how-to (https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent) in order to configure masquerading for some of my networks, and it works like a charm on the Linux nodes. But it doesn't work on the Windows hosts; it gives me this error:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "ip-masq-agent-pc9vn": Error response from daemon: network host not found
Does anyone know how I can configure masquerading on Windows nodes?
Adding details:
I know that Linux containers don't run on Windows nodes, so ip-masq-agent won't run on those nodes, and I know that I can use taints or labels to prevent the pods from being scheduled on them.
I use Windows nodes with Kubernetes because I have some .NET Framework applications running on them, and that works fine. My problem is that I need to masquerade the connections from the pods to hosts outside of the cluster, because the source addresses of those connections are the pod IPs, not the node IP.
On Linux machines I can do that using ip-masq-agent, which manages iptables rules to masquerade the traffic. But on Windows, ip-masq-agent doesn't work, for the reasons that @Rico gave in his answer.
I want to know if someone knows another way to achieve the same thing on Windows nodes.
I could use a "NAT machine" holding all connections in the middle and route all traffic through that machine, but that's a really ugly way to do it.
Solution:
I ended up allowing the pod network to go through the VPN. Thank you for all the replies.
The simple answer is you can't. iptables is a Linux thing. Windows has some alternatives that you can use to set up NAT (netsh), as described here: https://superuser.com/questions/1088309/windows-10-nat-port-forwarding-ip-masquerade, but there's no specific K8s support, so you will be on your own.
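As a hedged illustration of what that answer describes, Windows can create a NAT for an internal prefix with the PowerShell NetNat module; the prefix below is a placeholder for whatever pod CIDR your node uses, and again there is no Kubernetes integration behind it:

    New-NetNat -Name "PodNat" -InternalIPInterfaceAddressPrefix "10.244.0.0/24"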
To make sure your ip-masq-agent doesn't get scheduled on your Windows nodes, you can follow a NodeSelector plus Taint/Toleration approach, as described here.
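For example, a minimal sketch of the NodeSelector part, added to the ip-masq-agent DaemonSet's pod template so its pods only land on Linux nodes (this assumes the well-known kubernetes.io/os node label, which current Kubernetes versions set automatically):

    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/os: linux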
A wider question would be: what are you trying to run on the Windows machines? Windows containers are not interchangeable with Linux containers. If you want your Linux pods and Windows pods to talk to each other, have you tried Flannel?
I tried to find the answer in previous posts, but I did not find it!
My question may seem dumb; I'm just trying to figure this out :)
I'm new to Docker and Kubernetes, and I'm trying to understand the architecture of Kubernetes clusters, nodes, and pods.
I'm using two machines with Docker installed, and each machine has two containers running. I want to install MicroK8s to start playing with Kubernetes. My questions are:
As in the image below: can I install it on a separate machine and connect it to my Docker host machines, so that it will manage my containers there with the support of some sort of agent or service? Or must Kubernetes/MicroK8s be installed on the machine that will host the containers?
Can I add my running Docker containers directly to a pod, or must I re-create them?
Many thanks
You can play with any VM software (CPU virtualization required).
You can set up three VMs (master, node1, node2). You have to install Kubernetes in each VM. When you connect them through Calico they can communicate with each other. When you create pods with an app or a DB, you can load-balance them across node1 and node2 (or more) from the master. Then you can create a Service to expose a route to the pods. Or, if you want to run everything on one big server, you can; horizontal or vertical scaling is your choice. A sketch of that setup is shown below.
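As a hedged sketch of that three-VM setup using kubeadm (the pod CIDR is Calico's documented default, the Calico manifest URL may differ for your release, and the join token and hash are placeholders printed by kubeadm init):

    # On the master VM:
    sudo kubeadm init --pod-network-cidr=192.168.0.0/16
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    # On node1 and node2:
    sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>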
You can't mount a running Docker container into a pod, but you can load a Docker image from any registry.
I have a simple one-master (Ubuntu 16.04), one-worker (Windows Server 1803) Kubernetes cluster running in AWS. I am using Flannel for networking.
I have been able to deploy Windows containers using kubectl from the master without issue, and deploying multiple pods shows they are able to talk to each other. But I am not able to ping or curl the pods from the Kubernetes Windows node host itself, or from the open internet. The pods are also not able to communicate with the outside internet (they can't curl external DNS names or even IP addresses).
Side note: when I deploy the same image directly with Docker on the Windows node, it is able to connect to the internet and be accessed over the internet.
I used the following setup from Microsoft, which uses kubeadm, Flannel, and scripts from the Microsoft SDN repo:
https://onedrive.live.com/view.aspx?resid=E2B6765015E5FA01!339&ithint=file%2cdocx&app=Word&authkey=!AGvs_s_hWs7xHGs
It is my understanding that on Windows the host network interface is not connected to the Kubernetes network interface by default, whereas the Docker network uses the default interface, which might be why the Docker deployments can be accessed but the Kubernetes deployments cannot.
However, I haven't found information on connecting these networks when using Flannel for pod communication on Windows.
I can add any logs or config info that anyone thinks is useful.
Any thoughts? Thanks for your help!
More details:
I am looking into this: https://unofficial-kubernetes.readthedocs.io/en/latest/getting-started-guides/windows/, which describes connecting the network interfaces between the Windows default and Kubernetes networks, but it does not seem to rely on the same Flannel host-gw model I used to set this up.