Kubernetes on Windows: Can't connect to Pods from node host server or Internet

I have a simple one-master (Ubuntu 16.04), one-worker (Windows Server 1803) Kubernetes cluster running in AWS. I am using Flannel for networking.
I have been able to deploy Windows containers using kubectl from the master without issue, and multiple pods deployed this way can talk to each other. But I am not able to ping or curl the pods even from the Windows node that hosts them, let alone from the open internet. The pods are not able to communicate with the outside internet either (they can't curl external DNS names or even raw IP addresses).
Side note: the same image deployed directly with Docker on the Windows node can connect to the internet and be accessed over the internet.
I used the following setup from Microsoft, which uses kubeadm, flannel and scripts from Microsoft SDN repo.
https://onedrive.live.com/view.aspx?resid=E2B6765015E5FA01!339&ithint=file%2cdocx&app=Word&authkey=!AGvs_s_hWs7xHGs
It is my understanding that on Windows the host network interface is not connected to the Kubernetes network interface by default, while Docker's default network does use the host interface. That might be why Docker deployments can be accessed but Kubernetes deployments cannot.
However, I haven't found info on connecting these networks when using Flannel for pod communication on Windows.
I can add any logs or config info that anyone thinks is useful.
Any thoughts? Thanks for your help!
More Details:
I am looking into this: https://unofficial-kubernetes.readthedocs.io/en/latest/getting-started-guides/windows/ which describes connecting the network interfaces between the Windows default network and Kubernetes, but it does not seem to rely on the same Flannel host-gw model I used to set this up.
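As a first diagnostic sketch (the network names vary by setup: the Microsoft SDN scripts typically create an l2bridge network such as "cbr0", while plain docker run uses the default "nat" network; the pod name, pod IP and port below are placeholders):
docker network ls                                           # on the Windows node: which container networks exist
kubectl get pods -o wide                                    # on the master: pod IPs and the nodes they landed on
kubectl exec <some-pod> -- curl -s http://<pod-ip>:<port>   # confirm pod-to-pod traffic really works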

Related

How to setup Kubernetes cluster on several windows hosts?

I have several Windows servers available and would like to setup a Kubernetes cluster on them.
Is there some tool or a step by step instruction how to do so?
What I tried so far is to install Docker Desktop and enable its Kubernetes feature.
That gives me a single-node cluster. However, adding additional nodes to that Docker-Kubernetes cluster (from different Windows hosts) does not seem to be possible:
Docker desktop kubernetes add node
Should I first create a Docker Swarm and could then run Kubernetes on that Swarm? Or are there other strategies?
I guess that I need to open some ports in the Windows Firewall settings of the hosts, and map those ports to some Docker containers in which Kubernetes will be installed? Which ports?
Is there some program that I could install on each Windows host and that would help me with setting up a network with multiple hosts and connecting the Kubernetes nodes running inside Docker containers? Like a "kubeadm for Windows"?
Would be great if you could give me some hint on the right direction.
Edit:
Related info about installing kubeadm inside Docker container:
https://github.com/kubernetes/kubernetes/issues/35712
https://github.com/kubernetes/kubeadm/issues/17
Related question about Minikube:
Adding nodes to a Windows Minikube Kubernetes Installation - How?
Info on kind (kubernetes in docker) multi-node cluster:
https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/
(Creates a multi-node Kubernetes cluster on a single Windows host; a sample kind config is sketched after the links below.)
Also see:
https://github.com/kubernetes-sigs/kind/issues/2652
https://hub.docker.com/r/kindest/node
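For reference, a minimal kind config for such a multi-node cluster looks roughly like this (the file name kind-multi-node.yaml is arbitrary, and note that even on a Windows host the kind nodes themselves are Linux containers, so this does not give you Windows worker nodes):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
Create the cluster with: kind create cluster --config kind-multi-node.yaml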
You can always refer to the official Kubernetes documentation, which is the right source for this information and the correct way to approach this question.
Based on Adding Windows nodes, you need two prerequisites:
Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/Overlay networking, you must also have KB4489899 installed.
A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see Creating a single control-plane cluster with kubeadm).
The second point is especially important, since all control plane components are supposed to run on Linux systems. (I guess you could run a Linux VM on one of the servers to host the control plane components, but networking would be much more complicated.)
And once you have a properly running control plane, there is a kubeadm for Windows flow to join Windows nodes to the Kubernetes cluster, as well as documentation on how to upgrade Windows nodes.
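As a rough sketch of the join step (the token and hash below are placeholders that the first command prints for you; the Windows node must already have a container runtime, kubeadm and kubelet installed per the "Adding Windows nodes" guide):
kubeadm token create --print-join-command                                                           # run on the Linux control plane
kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # run on the Windows node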
For the firewall and which ports should be open, check ports and protocols. For a worker node (which the Windows nodes will be), the relevant ports are listed below; a netsh sketch for opening them follows the table.
Protocol  Direction  Port Range   Purpose            Used By
TCP       Inbound    10250        Kubelet API        Self, Control plane
TCP       Inbound    30000-32767  NodePort Services  All
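A minimal sketch for opening those ports in the Windows Firewall (the rule names are arbitrary):
netsh advfirewall firewall add rule name="Kubelet API" dir=in action=allow protocol=TCP localport=10250
netsh advfirewall firewall add rule name="NodePort Services" dir=in action=allow protocol=TCP localport=30000-32767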
Another option is running Windows nodes in cloud-managed Kubernetes, for example GKE with a Windows node pool (yes, I understand that it's not your use case, but it may be useful for further reference).

docker desktop kubernetes - how to map ports with ClusterFirstWithHostNet

I'm using Kubernetes from Docker for Windows and I encountered a problem. I use a StatefulSet with the following part of the config:
spec:
  terminationGracePeriodSeconds: 300
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
In classic Kubernetes this spec exposes all ports from the pod on the node IP, so all of them can be accessed through it. I'm trying to develop this on Kubernetes from Docker for Windows, but it seems that I cannot access the node by its IP (like I can in minikube or microk8s); instead, Docker for Windows maps localhost to the cluster. So here is the problem: this config exposes all ports on the node IP, which is for example 192.168.65.4, but I cannot access that from Windows. I can only access the cluster via localhost, and that only exposes the protocol-related port, for example 443. So when my service runs on, say, port 10433, there is no access from localhost:10433, but there is also no access in general through the node IP. Is there any way to configure it to work like classic Kubernetes, where all ports are exposed? I know that a single port can be exposed through a NodePort, but it's important for me to expose all ports from the pod to imitate real Kubernetes behaviour.
In general, Docker host networking doesn't work on non-Linux platforms. It's accepted as a valid Docker option, but the "host" network isn't actually the physical system's network. This probably applies to the Kubernetes setup embedded in Docker Desktop as well.
It should be pretty rare to need host networking, and even more unusual in Kubernetes. Host networking disables the normal inter-container communication mechanisms. Kubernetes in particular has a complex network environment and there is usually more than one node; opting out of the network setup like this can make it all but impossible to reach your service, either from inside the cluster or outside.
Instead of host networking, you should use the normal Kubernetes networking setup. Pretty much every Deployment you create will need a matching Service, and if you give that Service type: NodePort it will be accessible from outside the cluster (try both the assigned nodePort: number and the Service's cluster-internal port:; it's not clear which one Docker Desktop actually uses).
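As a sketch, assuming a Deployment whose pods carry the label app: my-app and listen on port 10433 (both names are made up for illustration):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app        # must match the Deployment's pod labels
  ports:
  - port: 10433        # cluster-internal port
    targetPort: 10433  # container port
    nodePort: 30433    # must fall in the 30000-32767 range
With Docker Desktop you would then try both localhost:30433 and localhost:10433 to see which one is actually mapped.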
For some purposes, the easiest approach is to set up a local port-forward:
kubectl port-forward deployment/some-deployment 8888:3000
will set up a port-forward from port 8888 on the local system to port 3000 on some pod managed by the named deployment. This forwards to a single pod (if you have multiple replicas, it targets only one of them); it's slower than a direct connection, and the port-forward will fail occasionally, but it is good enough for maintenance tasks like database migrations.
imitate real Kubernetes behaviour
In the environment I work on normally, each cluster has dozens to hundreds of nodes. The nodes can't be directly accessed from outside the cluster. It's also reasonably common to configure a PodSecurityPolicy to disallow host networking since it can be viewed as a security concern.
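For reference, a minimal PodSecurityPolicy disallowing host networking could look like the sketch below (PodSecurityPolicy has since been deprecated and was removed in Kubernetes 1.25; the RunAsAny rules are just permissive placeholders for the required fields):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-host-network
spec:
  hostNetwork: false   # the setting that blocks hostNetwork: true pods
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'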

Kubernete NAT pod IP on Windows Nodes

I have a hybrid GKE cluster running with some Linux and Windows nodes. I followed this how-to (https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent) in order to configure masquerading for some of my networks, and it works like a charm on the Linux nodes. But it doesn't work on the Windows hosts; it gives me this error:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "ip-masq-agent-pc9vn": Error response from daemon: network host not found
Anyone knows how can I configure masquerade on Windows Nodes?
Adding details:
I know that Linux containers don't run on Windows nodes, so ip-masq-agent won't run on those nodes, and I know that I can use taints or labels to keep the pods from being scheduled there.
I use Windows nodes with Kubernetes because I have some .NET Framework applications running on them, and that works fine. My problem is that I need to masquerade the connections from the pods to hosts outside of the cluster, because the source addresses of those connections are the pod IPs, not the node IP.
On Linux machines I can do that using ip-masq-agent, which manages iptables rules to masquerade the traffic. But on Windows, ip-masq-agent doesn't work, for the reasons @Rico gave in his answer.
I want to know if someone knows another way to achieve the same thing on Windows nodes.
I can use a "NAT Machine" holding all connections in the middle and route all traffic to that machine, but it's a really ugly way to do that.
Solution:
I ended up allowing the pod network to go through the VPN. Thank you for all the replies.
The simple answer is you can't. iptables is a Linux thing. Windows has some alternatives that you can use to set up NAT (netsh) like described here: https://superuser.com/questions/1088309/windows-10-nat-port-forwarding-ip-masquerade, but there's no specific K8s support so you will be on your own.
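For completeness, recent Windows Server versions also expose a NAT primitive through PowerShell rather than netsh; a raw sketch, with a made-up NAT name and assuming 10.244.0.0/16 is the pod CIDR (nothing in Kubernetes manages this rule for you):
New-NetNat -Name PodNat -InternalIPInterfaceAddressPrefix 10.244.0.0/16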
To make sure your ip-masq-agent doesn't get scheduled on your Windows nodes, you can follow a NodeSelector or Taint/Toleration approach as described here.
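For example, a nodeSelector in the DaemonSet's pod template is enough to pin it to Linux nodes (a fragment, not a full manifest; older clusters may use the beta.kubernetes.io/os label instead):
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux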
A wider question would be what are you trying to run on the Windows machines? Windows containers are not interchangeable with Linux containers. If you want your Linux pods and Windows pods to talk to each other have you tried Flannel?

Docker for Windows Swarm IIS Service with Win10 Insider running but unreachable

I'm currently experimenting with Swarm services with Docker for Windows. The new Win10 Insider build supports overlay networking for Windows containers, and I was pleased to see my IIS service actually starting. The only issue I came across is that I cannot reach the service in the browser, despite trying multiple things such as different ports and networks. The command issued is as follows:
docker service create --name webfarm -p 80:80 microsoft/iis
I have also tried the --network flag to try different networks, and I have made sure to test all IP addresses visible in the output of docker service inspect webfarm.
docker service ps webfarm does indicate that my service is in the RUNNING state and does not have any errors, so I don't know what else I can try, especially since these commands worked fine on Linux with Apache.
I was wondering if anyone has been able to successfully create a service using Windows Containers on the Windows Insider build (15046), and if so, how?
Never mind, I found that this actually is not supported yet.
The following source states:
"At the moment only DNS round robin is implemented as described in the Microsoft blog post. You cannot use to publish ports externally right now. More to come in the near future." (https://stefanscherer.github.io/docker-swarm-mode-windows10/)
And indeed, the blog post states the following:
"Currently, Windows supports DNS Round-Robin load balancing between services. The routing mesh for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can setup an external load balancer (e.g. NGINX) and use Swarm’s publish-port mode to expose container host ports over which to load balance." (https://blogs.technet.microsoft.com/virtualization/2017/02/09/overlay-network-driver-with-support-for-docker-swarm-mode-now-available-to-windows-insiders-on-windows-10/)
I guess I'll have to wait for this feature; in the meantime I will use the alternative.
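For the record, the publish-port mode mentioned in the quote looks roughly like this on a reasonably recent Docker (the published port 8080 is just an example):
docker service create --name webfarm --publish mode=host,target=80,published=8080 microsoft/iis
Each task then binds port 8080 directly on the host it runs on, and an external load balancer such as NGINX can balance across those host ports.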

Exposing several services with Vagrant and Kubernetes on my own server

Assume the following stack:
A dedicated server
The server is running Vagrant
Vagrant is running 2 virtual machines: master + minion-1 (Kubernetes)
minion-1 is running a pod
Within the pod are 2 containers: webservice and fileservice
Both webservice and fileservice should be accessible from the internet, i.e. from outside, either via web.mydomain.com / file.mydomain.com or via www.mydomain.com/web/ / www.mydomain.com/file/.
Before using Kubernetes, I was using a remote proxy (HAproxy) and simply mapped domain names to an internal ip / port.
Now with Kubernetes, I can imagine there is something dedicated to this task but I honestly have no clue from where to start.
I read about createExternalLoadBalancer, Kubernetes Services and kube-proxy. Should a reverse proxy still be put somewhere (in front of Vagrant, or within a pod)? Also, is using Vagrant a good option for production (staying within the scope of this question)?
The easiest thing for you to do at the moment is to make a Service of type NodePort, and to configure your HAProxy to point at minion-1:<nodePort>.
createExternalLoadBalancer is the old, less flexible way to do this: it requires the cloud provider to do the work. type=NodePort doesn't require anything special from the cloud provider.
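As a sketch, assuming the pod carries a label like app: mypod and the two containers listen on ports 80 and 8080 (all names and ports here are made up for illustration):
apiVersion: v1
kind: Service
metadata:
  name: webservice
spec:
  type: NodePort
  selector:
    app: mypod
  ports:
  - port: 80
    nodePort: 30080   # must fall in the 30000-32767 range
---
apiVersion: v1
kind: Service
metadata:
  name: fileservice
spec:
  type: NodePort
  selector:
    app: mypod
  ports:
  - port: 8080
    nodePort: 30081
HAProxy can then map web.mydomain.com to minion-1:30080 and file.mydomain.com to minion-1:30081.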
