For testing purposes, I want to set up the Kubernetes master so it is only accessible from the local machine and not from outside. Ultimately I am going to run a proxy server Docker container on the machine that is opened up to the outside. This is all inside a minikube VM.
I figure configuring kube-proxy is the way to go. I did the following:
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
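The edit in ~/cluster.yaml was roughly along these lines (a sketch from memory; the exact field names depend on the kubeadm config version in use, so treat this as an assumption rather than a verified config):
# excerpt of ~/cluster.yaml (sketch, unverified field names)
kubeProxy:
  config:
    bindAddress: 127.0.0.1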
Running netstat -ln | grep 8443 shows tcp 0 0 :::8443 :::* LISTEN, which means the bind address didn't take.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1, but that only changes the advertised address to 10.x.x.x in kubeadm config view. I feel that is probably the wrong thing anyway; I don't want the API server to become inaccessible to the other Docker containers that need to reach it.
I have also tried kubeadm config upload from-file --config ~/cluster.yaml and then manually restarting the container running kube-proxy.
I also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out. When you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. The various restart methods I found online don't seem to work either (I could just be doing this wrong).
I also tried editing the kube-proxy container's config file (bind-mounted from a local directory), but it gets overwritten when I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that lets me edit kube-proxy's config file either (since it's a DaemonSet).
Ultimately, I want an authenticated proxy server sitting in front of the k8s master (the apiserver specifically); direct access to the k8s master from outside the VM should not be possible.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible, at least from the local network where the other nodes reside, unless you want a single-node "cluster".
So, if you do not have a separate network interface that you could advertise or bind the address to, you need to limit access with the firewall or route rules mentioned above.
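As a rough sketch (assuming the API server is the thing listening on 8443 inside the minikube VM, as the netstat output suggests), iptables rules along these lines would only accept that port over the loopback interface:
# sketch: allow 8443 only via loopback, drop everything else (adjust port/interface to your setup)
iptables -A INPUT -p tcp --dport 8443 -i lo -j ACCEPT
iptables -A INPUT -p tcp --dport 8443 -j DROP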
Regarding your original question, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586
Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux with mysql -h 172.18.0.2 -P 3306 -u root -p. Before that I started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (a GraphQL server) which should communicate with this database. But whenever I try to connect from the built-in mysql client in my Windows Subsystem for Linux with the command shown above, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
What I have already tried to solve the problem on my own is checking whether the configuration file line (bind-address) is commented out, but that didn't help. Interestingly, I had already managed to set up a Docker container with MariaDB and connect to it from the outside; now, when I try exactly the same thing, with the only difference being that the container is placed in its own existing network, it won't work.
Hopefully there is someone out there who is able to help me with this annoying problem.
Thanks!
So far,
Daniel
//edit:
Now I have tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any errors, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently - it is by design: container isolation. Usually only the main host (e.g. the httpd service) is accessible externally, hiding the internal connections (the hosts it talks to in order to deliver the response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (e.g. 172.19.0.0/16).
Your container should be accessible on the Docker host address (127.0.0.1 if run locally) and the mapped port ("-p 3306:3306"), i.e. 3306. But of course that won't work if several running database containers map to the same host port.
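So, assuming the container was started with the docker container run shown in the question (with -p 3306:3306), connecting through the host-mapped port instead of the container IP would look like:
# connect via the Docker host address and the published port, not the container's 172.18.x.x address
mysql -h 127.0.0.1 -P 3306 -u root -p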
Isolation is done with the firewall - iptables. You can list the rules (iptables -L) from the Docker host level to see it.
You can modify the firewall to allow external access to the internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible at the internal address 172.18.0.2 and the original (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to run this after every Docker start - the rules are created/modified by Docker on the fly;
not every Docker container will respond to ping; check it from the Docker host (the Linux subsystem in this case) first, and from the Windows cmd later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
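The script itself just re-applies the rule from above (a minimal sketch; the path and file name are whatever you chose in the ExecStartPost line):
#!/bin/sh
# /etc/iptables/accept172_16.sh - re-apply the Docker network accept rule after the daemon starts
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT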
For Docker on an external (shared on the LAN) host, you should use route add (or the hosts file on your machine or router) to route 172.x.x.x addresses to the LAN Docker host.
Hint: use the Portainer project (with restart policy set to always) to manage Docker containers. It makes it easier to spot config errors, too.
I tried to run this hello world app on an AWS EC2 instance with docker-compose up --build. It works as expected and is accessible remotely via the EC2 public IP when I use port 80, i.e. "80:80", as shown in the docker-compose file.
However, if I change to another port mapping such as "5106:80" (see the compose excerpt after the notes below), it is not accessible from a remote host using <public IPv4 address>:5106, even though it is available locally if I SSH onto the EC2 instance and try localhost:5106. Please note:
I've ensured the EC2 instance is in a public subnet and have configured the security group so that the port (in this case, 5106) accepts inbound traffic from my laptop.
I know it's not a problem with the hello-world app, because I experience exactly the same problem with another app, i.e. only port 80 works with docker-compose port mapping on EC2.
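For reference, the relevant part of the compose file looks roughly like this (service and image names are placeholders, not the actual hello-world project):
# docker-compose.yml (sketch)
services:
  web:
    image: hello-world-app   # placeholder image
    ports:
      - "5106:80"            # host port 5106 -> container port 80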
As it works with port 80 and doesn't work with port 5106, it could be one of two things:
There is an issue with your security groups. You should check that you have added port 5106 to the inbound rules of your security group.
There is an issue with a firewall or antivirus that doesn't allow you to connect to web pages on ports other than 80 or 443. You can test whether this also happens from another device or on another network.
In this case, it seemed to be the latter.
Possibly the Docker network needs to be deleted:
docker network rm $(docker network ls -q)
Then run docker-compose up again.
I've been experimenting with an ICP instance (ICP 2.1.0.2): 1 master node and 2 worker nodes.
I noticed that the pods in my ICP Kubernetes cluster don't have outbound Internet connectivity (or are having DNS lookup issues).
For example, if I start up a busybox pod in my cluster and try "nslookup github.com" or "ping google.com", it fails:
kubectl run curl --image=radial/busyboxplus:curl -i --tty
[ root@curl-545bbf5f9c-gssbg:/ ]$ nslookup github.com
Server: 10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'github.com'
I checked and saw that "kube-dns" (service, pod, daemonset.extensions, daemonset.apps) does appear to be running.
When I'm logged in (e.g. via SSH) to the ICP master and worker node machines, I am able to ping these external sites successfully.
Any suggestions for how to troubleshoot this problem? Thanks!
We had kind of the reverse problem - we could look up anything on the internet or in other domains, but not the domain in which the cluster was deployed.
That turned out to be due to the vague documentation around what cluster_domain and cluster_CA_domain mean in config.yaml. But as a plus, we got to learn a bit more about those settings and about configuring kube-dns.
Basically, cluster_domain should be a private virtual domain for the cluster, for which kube-dns will be authoritative. For anything else, it should use the host's resolv.conf nameservers as upstream servers. If you suspect that your DNS servers are not being used for public DNS, you can update the kube-dns ConfigMap to specify the upstream servers it should use.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
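A minimal sketch of that ConfigMap (the 8.8.8.8/8.8.4.4 upstreams are just example values; substitute your own DNS servers):
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]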
This assumes you have configured cluster_domain and cluster_CA_domain correctly, of course.
They should look something like:
cluster_domain = mycluster.icp <----- could be "Mickey-mouse" for all it matters
cluster_CA_domain = icp.mycompany.com <----- the endpoint that portal/registry/api etc are accessible to users on
I am running IBM Cloud Private using 5 VMs on my laptop. My home network subnet is 192.168.100.x, whereas the subnet used by all 5 VMs is 192.168.142.x. I am forwarding port 8443 in VMware Workstation from the host to the master node, which is 192.168.142.103. My laptop's IP is 192.168.100.201.
I was hoping that I would be able to access the Web UI from any other machine in my home network, so I tried this URL from another machine:
https://192.168.100.201:8443
And it directs properly to the guest VM, as I see the URL change to:
https://192.168.100.201:8443/console/
But after a few seconds I get the message that the site cannot be reached. I noticed that the URL has changed from the original host laptop address 192.168.100.201 to the guest VM address 192.168.142.103, as shown:
https://192.168.142.103:8443/idauth/oidc/endpoint/OP/authorize?client_id=617a0480d5e506a5e797f852bea1df38&response_type=code&scope=openid%20email%20profile&redirect_uri=https://192.168.100.201:8443/auth/liberty/callback
This suggests that the redirection in the Web UI is not handled properly.
However, I installed kubectl for Windows on another machine, forwarded port 8001 from 192.168.100.201 to the master guest VM 192.168.142.103, and ran the kubectl config commands (from the Web UI's Client Configure option) on my other laptop (192.168.100.202):
kubectl config set-cluster pot_icp_cluster.icp --server=https://192.168.100.201:8001 --insecure-skip-tls-verify=true
kubectl config set-context pot_icp_cluster.icp-context --cluster=pot_icp_cluster.icp
kubectl config set-credentials admin --token=<token>
kubectl config set-context pot_icp_cluster.icp-context --user=admin --namespace=default
kubectl config use-context pot_icp_cluster.icp-context
And this works perfectly: I am able to run kubectl commands from the other laptop (192.168.100.202) against the VMs running on the first laptop (192.168.100.201), using port forwarding the same way I did for the Web UI.
My question is: Is there something that I can do to get this redirection problem fixed in the Web UI?
I received a reply from an expert: the Liberty server that authenticates and verifies a login has only the master node's IP address registered with it as a callback URL during installation. In IBM Cloud Private 2.1.0.1, there is no direct way to register new clients. However, this limitation is being fixed, and starting with the next upgrade, we should be able to register new clients dynamically post-install as well.
At my job I work with Docker, and the --net=host option works like a charm, exposing the Docker container's ports on the machine. This allows me to add grunt tasks that use certain ports, for example:
A task serving my coverage report on port 9001
A locally deployed version of my app served on port 9000
A watch/live-reload task on port 35729
A unit test runner on port 9876
When I began to use Docker on Mac, the first problem I had was that the --net=host option doesn't work anymore.
I did some research and understand why this is not possible (Docker on Mac runs in its own virtual machine). My current workaround is to use the -p option to expose the ports, but this does not scale well as I add more and more tasks that use ports, because I need an explicit -p mapping for each port I want to expose.
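For illustration (the image name is a placeholder), the run command ends up looking something like this and grows with every new port:
docker run -p 9000:9000 -p 9001:9001 -p 35729:35729 -p 9876:9876 my-dev-image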
Does anyone have this same problem? How do you deal with it?
Your issue is most probably that you are using dockertoolbox, dhingy/dlite, or anything else providing a full-fledged Linux VM, which then hosts Docker to run your containers inside that VM. This VM has, of course, its own network stack and its own IP on the host, and that's where your tools run into issues. The exposed ports of the container are not exposed on the OSX host's localhost, but rather on the OSX Docker-VM IP.
To solve those issues elegantly:
Expose ports to OSX localhost from the container
First, use/install Docker for Mac (https://docs.docker.com/engine/installation/mac/) instead of dockertoolbox or others. It is based on a special xhyve stack which reuses your host's network stack.
When you now do docker run -p 3306:3306 percona, it will bind 3306 on the OSX host's localhost, so every other OSX tool trying to attach to localhost:3306 will work (very useful), just as you were used to when you installed MySQL with brew install mysql or the like.
If you experience performance issues with code shares on OSX with Docker containers, check http://docker-sync.io - it is compatible with Docker for Mac (hint: I am biased on this one).
Export ports from the OSX host to a container
You do not really export anything in particular; rather, you make them accessible as a whole from all containers (all ports of the OSX host's localhost).
If you want to attach, from within a container, to a port offered on the OSX host, e.g. during an Xdebug session where your IDE listens on port 9000 on the OSX host's localhost and the container running FPM/PHP should attach to this osx-localhost:9000 on the Mac, you need to do this: https://gist.github.com/EugenMayer/3019516e5a3b3a01b6eac88190327e7c
So you create a dummy loopback IP, which lets you access your OSX host's ports from within containers using 10.254.254.254:9000 - this is portable and basically gives you all you need to develop the way you are used to.
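The core of that gist is simply adding an alias IP to the loopback interface on the Mac (10.254.254.254 is an arbitrary, otherwise unused address):
# on the OSX host: add an extra loopback address that containers can reach
sudo ifconfig lo0 alias 10.254.254.254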
So the first gives apps running on the Mac connectivity to container-exposed ports when they try to connect to localhost:<port>, and the second is the inverse, for when something in the container wants to attach to a port on the host.
One workaround, mentioned in "Bind container ports to the host", would be to use -P:
(or --publish-all=true|false) to docker run which is a blanket operation that identifies every port with an EXPOSE line in the image’s Dockerfile or --expose <port> commandline flag and maps it to a host port somewhere within an ephemeral port range.
The docker port command then needs to be used to inspect the created mapping.
So if your app can use docker port <CONTAINER> to retrieve the mapped port, you can add as many containers as you want and get the mapped ports that way (without needing an "explicit -p command for each port").
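For example (nginx is just a stand-in image here; any image with EXPOSE lines behaves the same way):
docker run -d -P --name web nginx    # publish every EXPOSEd port to a random host port
docker port web 80                    # prints the mapping, e.g. 0.0.0.0:32768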
I'm not sure if Docker for Mac will support bi-directional connections later: https://forums.docker.com/t/will-docker-for-mac-support-bi-directional-connection-between-host-and-container-in-the-future/19871
I have two solutions:
you can write a simple wrapper script and pass the ports you want to expose to the script (see the sketch after this list);
use Vagrant to start a virtual machine with the network under your control.
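A minimal sketch of such a wrapper for the first option (script and image names are placeholders):
#!/bin/sh
# run-dev.sh <image> <port> [<port> ...] - builds a -p flag for every port passed in
IMAGE=$1; shift
PORTS=""
for p in "$@"; do
  PORTS="$PORTS -p $p:$p"
done
exec docker run $PORTS "$IMAGE"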