How can I set up Kubernetes to work with a LoadBalancer on Docker for Windows?
I have this very simple Kubernetes hello world:
kubectl run my-nginx --image=nginx --replicas=1 --port=80
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
kubectl get svc
kubectl describe service my-nginx
curl -m 10 http://localhost/
curl -m 10 http://localhost:32026/
It does not work; localhost is not responding. The output I get is:
deployment.apps "my-nginx" created
service "my-nginx" exposed
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx LoadBalancer 10.103.126.2 localhost 80:32026/TCP 0s
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Annotations: <none>
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.103.126.2
LoadBalancer Ingress: localhost
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32026/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
curl: (28) Operation timed out after 10000 milliseconds with 0 bytes received
curl: (28) Operation timed out after 10000 milliseconds with 0 bytes received
The LoadBalancer seems to be there (its external IP is listed as localhost), but it does not respond.
I have tested that I can get inside the pod with kubectl exec pod-name -it -- bash, and I can see that nginx is running in the pod. However, it's not accessible from Windows.
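One thing worth checking here (a minimal sketch, assuming the service created above): the Endpoints: <none> line in the describe output means no ready pod is backing the service, so the curl has nothing to reach:
# "<none>" here means the selector matches no ready pods
kubectl get endpoints my-nginx
# kubectl expose copies the run=my-nginx selector; confirm a Ready pod carries that label
kubectl get pods -l run=my-nginx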
I have also tested that plain Docker port publishing works just fine:
docker run -dit --rm --name nginx -p 80:80 nginx
curl -m 10 http://localhost/
docker stop nginx
This works.
The connection to the Kubernetes LoadBalancer is somehow broken. Does it work for others, and is there any way to fix this?
Docker for Windows,
Version 2.0.0.3 (31259),
Channel: stable,
Build: 8858db3
The server is listening on port 32270; try localhost:32270.
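The NodePort is assigned when the service is exposed and changes between runs; as a quick sketch, the currently assigned one can be read with:
kubectl get svc my-nginx -o jsonpath='{.spec.ports[0].nodePort}'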
Related
I'm using Docker Desktop Kubernetes on Windows, and I cannot access a ClusterIP service internally from another pod.
This is what I deployed:
$ k get pods -o wide
my-nginx-deploy-d78b588bf-cffmd 1/1 Running 0 83s 10.1.0.123 docker-desktop
my-nginx-deploy-d78b588bf-m945j 1/1 Running 0 83s 10.1.0.122 docker-desktop
$ k get services -o wide
nginx-service ClusterIP 10.103.79.175 8818/TCP 22m app=nginx
I'm checking internal connectivity by running a shell in one of the pods and sending a request to the service (by name and by IP) and directly to the pod:
$ k exec -ti my-nginx-deploy-d78b588bf-cffmd -- /bin/sh
curl nginx-service:8818
curl: (7) Failed to connect to nginx-service port 8818 after 21031 ms: Connection refused
curl 10.103.79.175:8818
curl: (7) Failed to connect to 10.103.79.175 port 8818 after 21031 ms: Connection refused
curl 10.1.0.122
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</html>
So the pod is running and can be accessed using the pod IP, but the service is rejecting connections.
This is the YAML of the service:
$ k get service nginx-service -o yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  clusterIP: 10.103.79.175
  clusterIPs:
  - 10.103.79.175
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: nginx-port
    port: 8818
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
What could be the reason? How can I dig into the problem?
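One way to start digging (a minimal sketch, assuming the pods shown above belong to the my-nginx-deploy deployment): a service whose selector matches no ready pods has no endpoints, and connections to its ClusterIP are refused exactly like this, so compare the selector against the pod labels:
# empty output here means the app=nginx selector matches no ready pods
kubectl get endpoints nginx-service
# compare against the labels actually carried by the running pods
kubectl get pods --show-labels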
I am trying to deploy a simple Spring Boot REST service on minikube (Windows 10). Below is my configuration.
Dockerfile
FROM openjdk:8-jdk-alpine
ENTRYPOINT ["/usr/bin/java", "-jar", "/usr/share/myservice/minikube-spring-boot-demo-0.0.1-SNAPSHOT.jar"]
ADD target/minikube-spring-boot-demo-0.0.1-SNAPSHOT.jar /usr/share/myservice/lib
ARG JAR_FILE
ADD target/${JAR_FILE} /usr/share/myservice/minikube-spring-boot-demo-0.0.1-SNAPSHOT.jar
EXPOSE 8080
The Docker image is running fine and I am able to run the app:
docker run -p 8080:8080 minikube-spring-boot-demo:0.0.1-SNAPSHOT
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minikube-spring-boot-demo
spec:
  selector:
    matchLabels:
      app: minikube-spring-boot-demo
      tier: backend
  replicas: 3
  template:
    metadata:
      labels:
        app: minikube-spring-boot-demo
        tier: backend
    spec:
      containers:
      - name: demo-backend
        image: nirajsonawane/minikube-spring-boot-demo:0.0.1-SNAPSHOT
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
Service
apiVersion: v1
kind: Service
metadata:
  name: minikube-spring-boot-demo-service
spec:
  selector:
    app: minikube-spring-boot-demo
    tier: backend
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30008
  type: NodePort
kubectl get all status
kubectl cluster-info
minikube logs
Service Details
I am not able to access the REST endpoint using <service-ip>:<nodePort>/<uri>:
http://127.0.0.1:30008/hello
http://172.17.0.2:30008/hello
Is there anything I am missing here? Any inputs will be useful.
Output of netstat -a:
Minikube runs in a virtual machine, so services can't be accessed through localhost or 127.0.0.1 from outside that machine.
Try running minikube service minikube-spring-boot-demo-service. It will show the service details and open the service in the browser.
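If you only want the URL without opening a browser, the same command takes a --url flag:
minikube service minikube-spring-boot-demo-service --url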
You can get your node IP using the command below:
kubectl get nodes -o wide
Then run the command below to get the NodePort:
kubectl get svc -o wide -n <namespace>
and read the port of your NodePort service.
Your application will then be reachable at http://<node-ip>:<nodePort>.
In your case it might be running on
http://127.0.0.1:30008/hello
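Putting that together (a minimal sketch, assuming the default namespace and the service manifest above):
# IP of the minikube VM
NODE_IP=$(minikube ip)
# nodePort assigned to the service (30008 in the manifest above)
NODE_PORT=$(kubectl get svc minikube-spring-boot-demo-service -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT/hello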
I'm trying to run the kubernetes guestbook app on Vagrant on my local Mac OS X machine.
I have all the nodes (master and node-1) running, and deployed the app by running the following:
./kubectl.sh create -f ../examples/guestbook/all-in-one/guestbook-all-in-one.yaml
As I'm running locally, I also changed the above YAML file to use NodePort instead of LoadBalancer.
Running ./kubectl.sh describe service frontend returns the following:
Name: frontend
Labels: app=guestbook,tier=frontend
Selector: app=guestbook,tier=frontend
Type: NodePort
IP: 10.247.127.146
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31030/TCP
Endpoints: 10.246.9.12:80,10.246.9.13:80,10.246.9.7:80
Session Affinity: None
No events.
If I try http://10.247.127.146:31030, it doesn't connect to the guestbook app.
Is there anything I'm doing wrong?
Thanks
nodePort is the port opened on every Kubernetes node. The 10.247.127.146 address is the service's virtual cluster IP, which is only reachable from inside the cluster (and only on port 80), which is why your URL fails.
You need to find the IP address of your Kubernetes node, and then http://<nodeIP>:31030 should work.
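For example (a sketch; the address comes from the node list, not from the service):
# list node addresses, then hit the NodePort on one of them
kubectl get nodes -o wide
curl http://<nodeIP>:31030/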
Is there anything special about running ingress controllers on Kubernetes CoreOS Vagrant Multi-Machine? I followed the example but when I run kubectl -f I do not get an address.
Example:
http://kubernetes.io/v1.1/docs/user-guide/ingress.html#single-service-ingress
Setup:
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
I looked at networking in Kubernetes. Everything looks like it should run without further configuration.
My goal is to create a local testing environment before I build out a production platform. I'm thinking there's something about how they set up their VirtualBox networking. I'm about to dive into the CoreOS cloud config, but thought I would ask first.
UPDATE
Yes, I'm running an ingress controller:
https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/nginx-alpha/rc.yaml
It runs without giving an error; it's just that when I run kubectl -f I do not get an address (a sketch of that check follows the list below). I'm thinking it's one of two things:
I have to do something extra in networking for CoreOS-Kubernetes Vagrant multi-node.
It's running right, but I'm pointing my localhost at the wrong IP. I'm using a 172.17.4.x IP; I also have 10.0.0.x. I can access services through the 172.17.4.x address using a NodePort, but I can't get to my Ingress.
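A quick way to watch for that address (a sketch; the Ingress object itself comes from the linked example):
# the ADDRESS column stays empty until a controller writes the Ingress status
kubectl get ingress -o wide
kubectl describe ingress <ingress-name>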
Here is the code:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - image: gcr.io/google_containers/nginx-ingress:0.1
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          hostPort: 80
Update 2
Output of commands:
kubectl get pods
NAME READY STATUS RESTARTS AGE
echoheaders-kkja7 1/1 Running 0 24m
nginx-ingress-2wwnk 1/1 Running 0 25m
kubectl logs nginx-ingress-2wwnk --previous
Pod "nginx-ingress-2wwnk" in namespace "default": previous terminated container "nginx" not found
kubectl exec nginx-ingress-2wwnk -- cat /etc/nginx/nginx.conf
events {
worker_connections 1024;
}
http {
}
I'm running an echoheaders service on NodePort. When I type the node IP and port into my browser, I get that just fine.
I restarted all nodes in virtualbox too.
With a lot of help from the Kubernetes IRC and Slack, I fixed this a while back. If I remember correctly, I had the ingress service listening on a port that was already being used, I think by Vagrant. These commands really helped:
kubectl get pod <nginx-ingress pod> -o json
kubectl exec <nginx-ingress pod> -- cat /etc/nginx/nginx.conf
kubectl get pods -o wide
kubectl logs <nginx-ingress pod> --previous
Instead of working with Google Cloud, I decided to set up Kubernetes on my own machine. I made a Docker image of my hello-world web server. I set up hello-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
  labels:
    name: hello
spec:
  replicas: 1
  selector:
    name: hello
  template:
    metadata:
      labels:
        name: hello
    spec:
      containers:
      - name: hello
        image: flaggy/hello
        ports:
        - containerPort: 8888
Now I want to expose the service to the world. I don't think the Vagrant provider has a load balancer (which seems to be the best way to do it), so I tried the NodePort service type. However, the newly created NodePort does not seem to be listening on any IP I try. Here's hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    name: hello
spec:
  type: NodePort
  selector:
    name: hello
  ports:
  - port: 8888
If I log into my minion I can access port 8888:
$ curl 10.246.1.3:8888
Hello!
When I describe my service this is what I get:
$ kubectl.sh describe service/hello
W0628 15:20:45.049822 1245 request.go:302] field selector: v1 - events - involvedObject.name - hello: need to check if this is versioned correctly.
W0628 15:20:45.049874 1245 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0628 15:20:45.049882 1245 request.go:302] field selector: v1 - events - involvedObject.kind - Service: need to check if this is versioned correctly.
W0628 15:20:45.049887 1245 request.go:302] field selector: v1 - events - involvedObject.uid - 2c0005e7-1dc2-11e5-8369-0800279dd272: need to check if this is versioned correctly.
Name: hello
Labels: name=hello
Selector: name=hello
Type: NodePort
IP: 10.247.5.87
Port: <unnamed> 8888/TCP
NodePort: <unnamed> 31423/TCP
Endpoints: 10.246.1.3:8888
Session Affinity: None
No events.
I cannot find anything listening on port 31423, which, as I gather, should be the external port for my service. I am also puzzled by the IP 10.247.5.87.
I note this:
$ kubectl.sh get nodes
NAME LABELS STATUS
10.245.1.3 kubernetes.io/hostname=10.245.1.3 Ready
Why is that IP different from what I see in describe for the service? I tried accessing both IPs from my host:
$ curl 10.245.1.3:31423
curl: (7) Failed to connect to 10.245.1.3 port 31423: Connection refused
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
$
So IP 10.245.1.3 is reachable, although port 31423 is not bound on it. I tried routing 10.247.5.87 to vboxnet1, but it didn't change anything:
$ sudo route add -net 10.247.5.87 netmask 255.255.255.255 vboxnet1
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
If I do sudo netstat -anp | grep 31423 on the minion nothing comes up. Strangely, nothing comes up if I do sudo netstat -anp | grep 8888 either.
There must be either some iptables magic or some interface in promiscuous mode being abused.
Would it be this difficult to get things working on bare metal as well? I haven't tried the AWS provider either, but I am getting worried.
A few things.
Your single pod is 10.246.1.3:8888 - that seems to work.
Your service is 10.247.5.87:8888 - that should work as long as you are within your cluster (it's virtual - you will not see it in netstat). This is the first thing to verify.
Your node is 10.245.1.3 and your service should ALSO be on 10.245.1.3:31423 - this is the part that does not seem to be working correctly. Like service IPs, this binding is virtual - it should show up in iptables-save but not netstat. If you log into your node (minion), can you curl localhost:31423?
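A couple of checks along those lines (a sketch, assuming kube-proxy is programming iptables on that node):
# run on the node: the NodePort should appear in kube-proxy's rules,
# even though nothing shows up in netstat
sudo iptables-save | grep 31423
# and the service should answer locally on the node
curl -m 5 http://localhost:31423/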
You might find this doc useful: https://github.com/thockin/kubernetes/blob/docs-debug-svcs/docs/debugging-services.md