I'm trying to run the kubernetes guestbook app on Vagrant on my local Mac OS X machine.
I have all the nodes (master and node-1) running, and I deployed the app by running the following:
./kubectl.sh create -f ../examples/guestbook/all-in-one/guestbook-all-in-one.yaml
As I'm running locally, I also changed the above yaml file to use NodePort instead of LoadBalancer.
Running the following:
./kubectl.sh describe service frontend returns the following:
Name: frontend
Labels: app=guestbook,tier=frontend
Selector: app=guestbook,tier=frontend
Type: NodePort
IP: 10.247.127.146
Port: <unnamed> 80/TCP
NodePort: <unnamed> 31030/TCP
Endpoints: 10.246.9.12:80,10.246.9.13:80,10.246.9.7:80
Session Affinity: None
No events.
If I try http://10.247.127.146:31030, it doesn't connect to the guestbook app.
Is there anything I'm doing wrong?
Thanks
nodePort is the port available on every kubernetes node.
You need to find the IP address of your kubernetes node, and then http://<nodeIP>:31030 should work.
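For example, with the Vagrant provider the node name reported by kubectl is typically the node's IP, so something like the following should reach the service (the IP here is only a placeholder; substitute whatever your cluster reports):
$ ./kubectl.sh get nodes
$ curl http://10.245.1.3:31030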
I am trying to connect my springboot app (running inside minikube) to kafka on my localhost (ie, laptop).
I have tried many things, including headless services, services without selectors, and updating minikube's /etc/hosts, but nothing has worked yet.
I get an error from Spring Boot saying No resolvable bootstrap urls given in bootstrap.servers.
Can someone please point me to what I am doing wrong?
My Headless Service
apiVersion: v1
kind: Service
metadata:
  name: es-local-kafka
  namespace: demo
spec:
  clusterIP: None
---
apiVersion: v1
kind: Endpoints
metadata:
  name: es-local-kafka
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - name: "kafkabroker1"
        port: 9191
      - name: "kafkabroker2"
        port: 9192
      - name: "kafkabroker3"
        port: 9193
My application properties for kafka:
kafka.bootstrap-servers=${LOCALHOST}:9191,${LOCALHOST}:9192,${LOCALHOST}:9193
My Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: rr-config
  namespace: demo
data:
  LOCALHOST: es-local-kafka.demo.svc
It's not clear whether your application is running on Minikube or on the local system while you want to leverage Kafka on Minikube.
If your application is running on the local system and Kafka on Minikube,
you can connect the application to the Kafka cluster using the IP of Minikube.
Here is a good example: https://github.com/d1egoaz/minikube-kafka-cluster
Git clone : https://github.com/d1egoaz/minikube-kafka-cluster
cd minikube-kafka-cluster
kubectl apply -f 00-namespace/
kubectl apply -f 01-zookeeper/
kubectl apply -f 02-kafka/
kubectl apply -f 03-yahoo-kafka-manager/
kubectl get svc -n kafka-ca1 (note the NodePort, 31445 in this example)
Get the IP of minikube:
minikube ip
Now, from your local system, you can connect to Kafka on minikube; open http://<minikube-ip>:<port> and you will see the Kafka manager UI in the browser.
If you are running the Spring Boot application on minikube:
If both services are running in the same namespace, you only have to use the service name to connect.
Use just the service name in Spring Boot; if a port is required, you can pass it as well:
es-local-kafka
You can also try passing the full service DNS name:
<servicename>.<namespace>.svc.cluster.local
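For example (a sketch assuming the es-local-kafka service from the question, in the demo namespace, with a broker listening on 9191):
kafka.bootstrap-servers=es-local-kafka.demo.svc.cluster.local:9191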
A headless service serves a different purpose, and a service without a selector is unusual; in that case your service won't select any Pods, so it can't route to them unless you manage the Endpoints yourself.
I eventually got a fix, and it doesn't need all the crazy stuff I was referring to in my question:
You need to make sure your Kafka broker is bound to 0.0.0.0 instead of 127.0.0.1 (localhost). By default, in the single-node Kafka broker setup, localhost is what is used. I went with this due to both the time constraint and the fact that this was just for a POC on my local machine (prod will have a specific DNS-able Kafka URL anyway, so no such localhost shenanigans are needed).
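For example, in the broker's server.properties this looks roughly like the following (a sketch; port 9191 is taken from the question, and the advertised host should be whatever address the minikube VM can reach your laptop on):
listeners=PLAINTEXT://0.0.0.0:9191
advertised.listeners=PLAINTEXT://<host-ip-reachable-from-minikube>:9191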
In the Kafka URL in your application properties file, instead of localhost, you need to give the IP as the minikube ip. This is the same IP that you will get if you run the command minikube ip :)
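A sketch of what that change looks like in the ConfigMap from the question (192.168.49.1 is only an example value; use whatever your own minikube ip / host-access address is):
apiVersion: v1
kind: ConfigMap
metadata:
  name: rr-config
  namespace: demo
data:
  LOCALHOST: "192.168.49.1"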
Read more about how this works here: https://minikube.sigs.k8s.io/docs/handbook/host-access/
I deployed my containerized application to google kubernetes engine using Ansible.
I created a pod for the application using a Deployment, and I also specified containerPort as 8080. This seems to be working fine.
- name: Create k8s pod for nginx
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: "{{ app }}"
        namespace: "{{ namespace }}"
        labels:
          app: "{{ app }}"
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: "{{ app }}"
        template:
          metadata:
            labels:
              app: "{{ app }}"
          spec:
            containers:
              - name: hello-dashboard
                image: "{{ image_name }}"
                # This app listens on port 8080 for web traffic by default.
                ports:
                  - containerPort: 8080
                env:
                  - name: PORT
                    value: "8080"
Tracking the deployment
kubectl get deployments --namespace=nginx
shows the deployment is READY and AVAILABLE
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 34m
checking the pods created by the deployment
kubectl get pods --namespace=nginx
this also shows the pod was created
NAME READY STATUS RESTARTS AGE
nginx-cb894bfc5-trnrk 1/1 Running 0 33m
Now, when I check the LoadBalancer service
kubectl get services --namespace=nginx
The service was also created and assigned an external-ip
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.99.240.181 35.242.130.109 80:31005/TCP 33m
But the problem is that I can't access the deployed application using the external IP from the LoadBalancer; the browser tells me it cannot be reached.
Most likely this is an issue with your Kubernetes Service or Deployment. GKE will automatically provision the firewall rules required for the ports mapped to the Service resource.
Ensure that you have exposed the correct port on your Service and mapped it to a valid port on your Deployment's Pods (see the sketch after the list below). Also note that the firewall rule required is for port 31005 (the nodePort), since this is the port that accepts traffic from the load balancer.
Ensure you allow incoming traffic as follows :
From the internet to the load balancer on TCP port 80 (the Service port).
From the load balancer to all Kubernetes nodes on TCP port 31005.
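For illustration, here is a minimal sketch of a Service that exposes port 80 externally and forwards to the container port 8080 from the Deployment above (the name, namespace and app label are assumed to match what {{ app }} and {{ namespace }} resolve to, i.e. nginx):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80          # port the load balancer listens on
      targetPort: 8080  # containerPort of the pod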
I think there is a mismatch in ports somewhere, for example between the port the application listens on inside the container and the port / targetPort configuration of the Service (service.yaml).
I have a minikube cluster running on Mac OSX and a simple Spring Boot REST api that connects to Redis and Mongo DB, which I have installed and running locally.
I don't want to run Redis / MongoDB in a Docker container.
I will probably run them remotely in the cloud, therefore I would probably just connect to an external IP address.
What I don't understand is what IP address I should use to connect to my localhost machine.
I start up Minikube with the hyperkit VM driver.
Edit:
I also tried to start using virtualbox:
minikube start --vm-driver=virtualbox
In my spring boot application, I've configured:
spring.data.mongodb.host = 10.0.2.2
spring.redis.host = 10.0.2.2
But still, I get connection errors:
This works when I run the application locally on my host machine.
For the sake of completeness, this is my yaml file:
---
apiVersion: v1
kind: Service
metadata:
  name: posts-api
  labels:
    app: posts-api
    env: dev
spec:
  type: NodePort
  selector:
    app: posts-api
  ports:
    - protocol: TCP
      port: 8083
      name: http
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: posts-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: posts-api
    spec:
      containers:
        - name: posts-api
          image: kimgysen/posts-api:latest
          ports:
            - containerPort: 8083
          livenessProbe:
            httpGet:
              path: /health
              port: 8083
            initialDelaySeconds: 120
            timeoutSeconds: 3
I'll give you the answer I gave to someone with the same problem (different tech):
Kubernetes pod unable to connect to rabbit mq instance running locally
Replace the IP and port number, and the Service and Endpoints names as appropriate.
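As a sketch of that pattern (assumptions: 10.0.2.2 is the VirtualBox NAT address of the host as seen from the VM, and 27017 is MongoDB's default port; adjust both, and add a second Service/Endpoints pair for Redis in the same way):
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  ports:
    - protocol: TCP
      port: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb
subsets:
  - addresses:
      - ip: 10.0.2.2
    ports:
      - port: 27017
The pods can then use spring.data.mongodb.host = mongodb instead of an IP address.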
Objective: create a k8s LoadBalancer service on AWS whose IP is static
I have no problem accomplishing this on GKE by pre-allocating a static IP and passing it in via loadBalancerIP attribute:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dave
spec:
  loadBalancerIP: 17.18.19.20
...etc...
But doing the same in AWS results in the externalIP stuck as <pending> and an error in the Events history.
Removing the loadBalancerIP value allows k8s to spin up a Classic LB:
$ kubectl describe svc dave
Type: LoadBalancer
IP: 100.66.51.123
LoadBalancer Ingress: ade4d764eb6d511e7b27a06dfab75bc7-1387147973.us-west-2.elb.amazonaws.com
...etc...
but AWS explicitly warns me that the IPs are ephemeral (there are sometimes 2), and Classic load balancers don't seem to support attaching static IPs
Thanks for your time
As noted by @Quentin, Kubernetes now supports the AWS Network Load Balancer:
https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/
Network Load Balancing in Kubernetes
Included in the release of Kubernetes 1.9, I added support for using the new Network Load Balancer with Kubernetes services. This is an alpha-level feature, and as of today is not ready for production clusters or workloads, so make sure you also read the documentation on NLB before trying it out. The only requirement to expose a service via NLB is to add the annotation service.beta.kubernetes.io/aws-load-balancer-type with the value of nlb.
A full example looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Instead of working with Google Cloud, I decided to set up Kubernetes on my own machine. I made a docker image of my hello-world web server. I set up hello-controller.yaml:
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello
  labels:
    name: hello
spec:
  replicas: 1
  selector:
    name: hello
  template:
    metadata:
      labels:
        name: hello
    spec:
      containers:
        - name: hello
          image: flaggy/hello
          ports:
            - containerPort: 8888
Now I want to expose the service to the world. I don't think the vagrant provider has a load balancer (which seems to be the best way to do it), so I tried the NodePort service type. However, the newly created NodePort does not seem to be listening on any IP I try. Here's hello-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    name: hello
spec:
  type: NodePort
  selector:
    name: hello
  ports:
    - port: 8888
If I log into my minion I can access port 8888
$ curl 10.246.1.3:8888
Hello!
When I describe my service this is what I get:
$ kubectl.sh describe service/hello
W0628 15:20:45.049822 1245 request.go:302] field selector: v1 - events - involvedObject.name - hello: need to check if this is versioned correctly.
W0628 15:20:45.049874 1245 request.go:302] field selector: v1 - events - involvedObject.namespace - default: need to check if this is versioned correctly.
W0628 15:20:45.049882 1245 request.go:302] field selector: v1 - events - involvedObject.kind - Service: need to check if this is versioned correctly.
W0628 15:20:45.049887 1245 request.go:302] field selector: v1 - events - involvedObject.uid - 2c0005e7-1dc2-11e5-8369-0800279dd272: need to check if this is versioned correctly.
Name: hello
Labels: name=hello
Selector: name=hello
Type: NodePort
IP: 10.247.5.87
Port: <unnamed> 8888/TCP
NodePort: <unnamed> 31423/TCP
Endpoints: 10.246.1.3:8888
Session Affinity: None
No events.
I cannot find anyone listening on port 31423 which, as I gather, should be the external port for my service. I am also puzzled about IP 10.247.5.87.
I note this
$ kubectl.sh get nodes
NAME LABELS STATUS
10.245.1.3 kubernetes.io/hostname=10.245.1.3 Ready
Why is that IP different from what I see on describe for the service? I tried accessing both IPs on my host:
$ curl 10.245.1.3:31423
curl: (7) Failed to connect to 10.245.1.3 port 31423: Connection refused
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
$
So IP 10.245.1.3 is accessible, although port 31423 is not bound to it. I tried routing 10.247.5.87 to vboxnet1, but it didn't change anything:
$ sudo route add -net 10.247.5.87 netmask 255.255.255.255 vboxnet1
$ curl 10.247.5.87:31423
curl: (7) Failed to connect to 10.247.5.87 port 31423: No route to host
If I do sudo netstat -anp | grep 31423 on the minion nothing comes up. Strangely, nothing comes up if I do sudo netstat -anp | grep 8888 either.
There must be either some iptables magic or some interface in promiscuous mode being abused.
Would it be this difficult to get things working on bare metal as well? I haven't tried AWS provider either, but I am getting worried.
A few things.
Your single pod is 10.246.1.3:8888 - that seems to work.
Your service is 10.247.5.87:8888 - that should work as long as you are within your cluster (it's virtual - you will not see it in netstat). This is the first thing to verify.
Your node is 10.245.1.3 and your service should ALSO be on 10.245.1.3:31423 - this is the part that does not seem to be working correctly. Like service IPs, this binding is virtual - it should show up in iptables-save but not netstat. If you log into your node (minion), can you curl localhost:31423 ?
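For example, from the node (the first command checks the virtual service IP, the second the nodePort; the iptables-save grep just looks for rules mentioning the nodePort, since the binding will not show up in netstat):
$ curl 10.247.5.87:8888
$ curl localhost:31423
$ sudo iptables-save | grep 31423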
You might find this doc useful: https://github.com/thockin/kubernetes/blob/docs-debug-svcs/docs/debugging-services.md