For testing purposes, I want to set up the kubernetes master to be only accessible from the local machine and not the outside. Ultimately I am going to run a proxy server docker container on the machine that is opened up to the outside. This is all inside a minikube VM.
I figure configuring kube-proxy is the way to go. I did the following:
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
Running netstat -ln | grep 8443 shows tcp 0 0 :::8443 :::* LISTEN, which means it didn't take the IP.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1, but that only changes the advertised address to 10.x.x.x in kubeadm config view. I feel that is probably the wrong thing anyway; I don't want the API server to become inaccessible to the other Docker containers that need to reach it.
I have also tried kubeadm config upload from-file --config ~/cluster.yaml and then attempting to manually restart the Docker container running kube-proxy.
I also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out. When you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. Various online methods of restarting things don't seem to work either (I could just be doing this wrong).
I also tried editing the kube-proxy container's config file (bind-mounted from a local directory), but it gets overwritten when I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that lets me edit the kube-proxy config file either (since it's a DaemonSet).
Ultimately, I wish to use an authenticated proxy server sitting in front of the k8s master (the apiserver specifically). Direct access to the k8s master from outside the VM will not work.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible, at least via the local network where the other nodes reside, unless you want a single-node "cluster".
So, if you don't have a separate network interface to advertise or bind the address to, you need to limit it with the firewall or route rules mentioned above.
To your initial question topic, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586
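The firewall approach mentioned above can be sketched with iptables inside the minikube VM. This is a hedged sketch, not a tested recipe: port 8443 is taken from the question, and the rules are only applied when running as root.

```shell
#!/bin/sh
# Sketch: restrict the API server port (8443, from the question) to loopback.
# Assumption: plain iptables inside the minikube VM; adjust before applying.

apply_rules() {
  # Accept connections to 8443 arriving on the loopback interface
  iptables -A INPUT -i lo -p tcp --dport 8443 -j ACCEPT
  # Drop any other traffic aimed at 8443
  iptables -A INPUT -p tcp --dport 8443 -j DROP
}

if [ "$(id -u)" -eq 0 ] && command -v iptables >/dev/null 2>&1; then
  apply_rules && status="rules applied"
else
  status="skipped: needs root and iptables"
fi
echo "$status"
```

Note that traffic from containers addressed to the host may also traverse the INPUT chain, so you may need an extra ACCEPT for the bridge interface (e.g. -i docker0) if containers must keep reaching the API server.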
I am trying to use $ docker-compose up -d for a project and am getting this error message:
ERROR: for api Cannot start service api: driver failed programming external connectivity on endpoint dataexploration_api_1 (8781c95937a0a4b0b8da233376f71d2fc135f46aad011401c019eb3d14a0b117): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9000:tcp:172.19.0.2:80: input/output error
Encountered errors while bringing up the project.
I am wondering if it is maybe the port? I had been trying port 8080 previously. The project was originally set up on a Mac and I have cloned the repository from GitHub.
I got the same error message on my Windows 10 Pro / Docker v17.06.1-ce-win24 / Docker-Compose v1.14.0 using Windows Powershell x86 in admin mode.
The solution was to simply restart Docker.
If it happens once, restarting Docker will do the trick. In my case, it was happening every time I restarted my computer.
In that case, disable Fast Startup, or you will probably have to restart Docker every time your computer starts. This solution was obtained from here
Simply restarting Docker didn't fix the problem for me on Windows 10.
In my case, I resolved the problem with the exact steps below:
1) Close "Docker Desktop"
2) Run the commands below:
net stop com.docker.service
net start com.docker.service
3) Launch "Docker Desktop" again
Hope this will help someone else.
I got that error too. The main reason it happens is that Docker is already running a similar container. To resolve the problem (and avoid restarting Docker):
docker container ls
You'll get something similar to:
CONTAINER ID IMAGE COMMAND CREATED
1fa4ab2cf395 friendlyhello "python app.py" 28 seconds ago
This is the list of running containers; take the CONTAINER ID (copy it with Ctrl+C).
Now stop that container (so another image can run) with this command:
docker container stop <CONTAINER_ID>
And that's all! Now you can create the container.
For more information, visit https://docs.docker.com/get-started/part2/
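The ls-then-stop steps above can be collapsed using docker's publish filter, which matches containers by published host port. A minimal sketch, assuming the conflicting container publishes host port 9000 (the port from the error message):

```shell
#!/bin/sh
# Stop whichever container is publishing a given host port.
PORT=9000  # assumption: the port taken from the error message

if command -v docker >/dev/null 2>&1; then
  # -q prints only container IDs; the filter matches published host ports
  ids=$(docker ps -q --filter "publish=$PORT")
  if [ -n "$ids" ]; then
    docker stop $ids >/dev/null && result="stopped: $ids"
  else
    result="no container publishing port $PORT"
  fi
else
  result="docker not available"
fi
echo "$result"
```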
Normally this error happens when you try to start a container but the ports it needs are occupied, usually by Docker itself as the result of an earlier stop that went wrong.
For me the solution is:
Open Windows CMD as administrator and type netstat -oan to find the process (Docker is that process) occupying your port:
In my case my Docker ports are 3306, 6001, 8000 and 9001.
Now we need to free those ports, so kill the process by its PID (the PID column):
TASKKILL /PID 9816 /F
Restart Docker.
Be happy.
Regards.
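For anyone hitting the same symptom on Linux or macOS, the equivalent of the netstat -oan / TASKKILL pair is lsof plus kill. A rough sketch; the port number is only an example, not taken from the question:

```shell
#!/bin/sh
# Find and kill whatever process is listening on a given TCP port.
PORT=9001  # example value; substitute the port from your error message

if command -v lsof >/dev/null 2>&1; then
  # -t prints bare PIDs; -i tcp:PORT matches TCP sockets on that port
  pids=$(lsof -ti tcp:"$PORT" 2>/dev/null || true)
  if [ -n "$pids" ]; then
    kill $pids && outcome="killed PID(s): $pids"
  else
    outcome="nothing listening on port $PORT"
  fi
else
  outcome="lsof not available"
fi
echo "$outcome"
```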
I am aware there are already a lot of answers, but none of them solved the problem for me. Instead, I got rid of this error message by resetting Docker to factory defaults.
In my case, the problem was that the Docker container (Nginx) used port 80, and IIS used the same one. Setting IIS to another port solved the problem.
In most cases, the first thing to check is whether an old service is still running and using that port.
In my case, since I had changed the image name, docker-compose stop (then up) didn't stop the old container (service), so the new container could not be started.
A bit of a late answer, but I'll leave it here; it might help someone else.
On a Mac (Mojave), after a lot of restarts of both the Mac and Docker,
I had to run sudo apachectl stop.
The simplest way to solve this is restarting Docker. But in some cases that might not work, even though you don't have any containers running on the port. You can check the running containers using the docker ps command, and see all the containers that exited without being cleaned up using docker ps -a.
Imagine there is a container with the ID 8e35276e845e. You can use docker rm 8e35276e845e or docker rm 8e3 to remove the container. Note that a short prefix of the ID is enough as long as it is unambiguous; in this scenario 8e3 identifies 8e35276e845e.
If restarting doesn't work, you can try changing the ports of the services in the docker-compose.yml file; if you change the Apache port, you also have to change the port of the v-host (if there is one). This will resolve your problem.
Ex:
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8082:80
  depends_on:
    - mysql_db
should be changed into
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8083:80
  depends_on:
    - mysql_db
and the corresponding v-host also has to be changed.
Ex (according to the above scenario):
<VirtualHost *:80>
    ProxyPreserveHost On
    ServerAlias phpadocker.lk
    ProxyPass / http://localhost:8083/
    ProxyPassReverse / http://localhost:8083/
</VirtualHost>
This will help you to solve the above problem.
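After editing the compose file and the v-host, it's worth confirming the new mapping actually took. A hedged sketch; the service name apache_server and port 8083 come from the example above:

```shell
#!/bin/sh
# Recreate the service and probe the remapped host port.
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose up -d apache_server 2>/dev/null || true
  # -o /dev/null discards the body; %{http_code} prints just the status code
  code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:8083/ 2>/dev/null)
  check="HTTP $code"
else
  check="docker-compose not available"
fi
echo "$check"
```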
For many Windows users out there with the same issue, I would suggest restarting the computer as well, because most of the time (for me at least) restarting just Docker doesn't work. So I would suggest you follow these steps:
Restart your PC.
Then Start up your PowerShell as admin and run this:
Set-NetConnectionProfile -interfacealias "vEthernet (DockerNAT)" -NetworkCategory Private
After that restart your Docker.
After completing these steps you will be able to run without problems.
I hope that helps.
I have a docker image that runs a webserver and I would like to access it from my local OSX, but I'm having issues.
I start the container with: docker run -p 8000:8000 <container-name>
and I can see log messages telling me that the local server is listening on localhost:8000
I am able to get a successful response from running:
docker exec <CONTAINER-ID> curl "http://localhost:8000/"
Addresses I've tried on my local OSX are:
http://localhost:8000/
http://172.17.0.2:8000/ (the Docker container IP)
Neither of those work. Any suggestions?
Container is built from golang:1.8
Docker Version: Version 17.03.1-ce-mac5 (16048)
MacOS Sierra: 10.12.4
Firewall is turned off for testing purposes
I've tried the same process on Ubuntu 16.04 and no luck there either.
Newer versions of Docker use vpnkit on OSX to manage the port forwarding to the containers... you should allow vpnkit through your firewall if you want to expose the container ports.
Also, in your Go code, make sure to bind to 0.0.0.0 rather than 127.0.0.1 for your webserver code.
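The bind-address point is easy to reproduce with a throwaway container. This sketch is an assumption-laden illustration (the python:3 image and port 8000 are not from the question): a server bound to 127.0.0.1 inside the container is unreachable through -p, because Docker's proxy cannot forward to the container's loopback.

```shell
#!/bin/sh
# Demonstrate that binding to 127.0.0.1 inside a container defeats -p.
if command -v docker >/dev/null 2>&1; then
  # Serve on the container's loopback only; host-side curl should fail
  docker run -d --rm --name bind_demo -p 8000:8000 python:3 \
    python -m http.server 8000 --bind 127.0.0.1 >/dev/null 2>&1
  sleep 2
  if curl -s --max-time 3 http://localhost:8000/ >/dev/null; then
    demo="loopback bind reachable (unexpected)"
  else
    demo="loopback bind unreachable, as expected"
  fi
  docker rm -f bind_demo >/dev/null 2>&1
else
  demo="docker not available"
fi
echo "$demo"
```

Re-running without --bind 127.0.0.1 (http.server listens on all interfaces by default) should make the same curl succeed.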
Problem: Network is not routed to the host machine.
e.g.:
docker run -tip 80:8080 httpd
does NOT result in apache responding on localhost:8080 on the host machine or on docker.local:8080 or anything like that. If I try to connect from inside, the container works fine:
docker run -ti debian
curl 172.17.0.2
<html><body><h1>It works!</h1></body></html>
It seems that everything is fine on the Docker side itself.
On docker ps you get: ... 80/tcp, 0.0.0.0:80->8080/tcp ...
Environment: New, clean OS installation - OSX Sierra 10.12.2, Docker.app Version 1.13.0 stable (plus 1.13.0. beta and 1.12.0 beta tried as well with same results).
Assumption: There is something broken between Docker and the OS. I guess that this 'something' is Hyperkit (which is like a black box for me). Some settings might have been broken by the build script from here: http://bigchaindb-examples.readthedocs.io/en/latest/install.html#the-docker-way which is docker-machine centric, a fact I probably underestimated. The funny thing is that this was a fresh install: this build script was the first thing I ran on it, so I don't know if the networking actually worked before.
Question: How do I diagnose this? I would like to trace where exactly the traffic gets lost and fix it accordingly.
Your command line has the ports reversed:
docker run -tip 8080:80 httpd
That's the host port first, with an optional interface to bind, followed by the container port. You can also see that in the docker ps output where port 80 on the host is mapped to port 8080 inside the container.
The other problem some have is the service inside the container needs to listen on all container interfaces (0.0.0.0), not the localhost interface of the container, otherwise the proxy can't forward traffic to it. However, the default settings from official images won't have this issue and your curl command shows that doesn't apply to you.
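Put concretely, -p is always hostPort:containerPort. A small sketch with the same httpd image (which listens on 80 inside the container); host port 8080 matches the question:

```shell
#!/bin/sh
# Correct order: host port 8080 on the left, container port 80 on the right.
if command -v docker >/dev/null 2>&1; then
  docker run -d --rm --name port_demo -p 8080:80 httpd >/dev/null 2>&1
  sleep 2
  # Expect Apache's default "It works!" page from the host side
  body=$(curl -s --max-time 3 http://localhost:8080/ || echo "request failed")
  docker rm -f port_demo >/dev/null 2>&1
else
  body="docker not available"
fi
echo "$body"
```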
I got a weird issue with opening port 9200 on GCE. After:
Run VM in compute engine (Ubuntu 16.04) - yes, I know CentOS...not yet :-)
Install elasticsearch
gcloud compute --project realty4-1384 firewall-rules create allow-elasticsearch --allow TCP:9200 --target-tags elasticsearch
but I get the sad dinosaur page saying the connection was refused.
curl localhost:9200 - works
nginx and varnish work under the same conditions.
I suspect something with permissions; maybe somebody can give me a hint.
THANK YOU
It was a huge torture for me. I tried building Elasticsearch into a Docker container and used Kubernetes as the orchestrator; everything worked perfectly
until I started getting traffic. My aggregations tore everything apart.
So I had to find another way. I spent a day with nginx and got nowhere. Finally,
haproxy did work for me:
sudo apt-get install haproxy
sudo vim /etc/haproxy/haproxy.cfg
add after the defaults section:
listen elastic
    bind 0.0.0.0:9500
    mode http
    option forwardfor
    server elastic 127.0.0.1:9200 check
Make sure to open 9500 for TCP in your firewall rules and IT DOES WORK!
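Before (re)starting haproxy it is worth validating the edited file; the -c flag parses the configuration without starting the daemon. A sketch, assuming the stock config path from the answer:

```shell
#!/bin/sh
# Validate the haproxy config file without starting the daemon.
CFG=/etc/haproxy/haproxy.cfg  # path from the answer above

if command -v haproxy >/dev/null 2>&1 && [ -f "$CFG" ]; then
  if haproxy -c -f "$CFG" >/dev/null 2>&1; then
    verdict="config valid"
  else
    verdict="config invalid"
  fi
else
  verdict="haproxy or config file not available"
fi
echo "$verdict"
```

If the check passes, sudo systemctl restart haproxy (or service haproxy restart on older systems) picks up the new listen block, and curl localhost:9500 should answer the same way port 9200 does.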