How does the Kubernetes API server know which NodePort can be allocated? - go

I want to figure out how Kubernetes knows which NodePort can be allocated when creating a new Service of type NodePort, like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 80
I searched Google and found this Kubernetes source code, but I don't understand how it works:
https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/portallocator/allocator.go

The NodePort is chosen randomly from the range 30000-32767. You can also set it explicitly in the service definition.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
From the documentation: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Update
The code in the package kubernetes/pkg/registry/core/service/portallocator is responsible for allocating a node port for a service.
This test documents the behavior: https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/core/service/portallocator/operation_test.go
Kubernetes picks a random port from the range; if that port isn't free, it tries the next one until it finds one that is.
If you can read Go, the other files in that package are a good starting point for understanding the behavior.
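As a rough sketch of that random-then-scan behavior (not the actual allocator code, which tracks ports with a bitmap; this is only an illustration of the idea):

package main

import (
    "errors"
    "fmt"
    "math/rand"
)

const (
    portRangeBase = 30000 // default NodePort range start
    portRangeSize = 2768  // 30000-32767 inclusive
)

// allocateNodePort mimics the behavior described above: pick a random offset
// in the range, then scan forward (wrapping around) until a free port is found.
func allocateNodePort(allocated map[int]bool) (int, error) {
    offset := rand.Intn(portRangeSize)
    for i := 0; i < portRangeSize; i++ {
        port := portRangeBase + (offset+i)%portRangeSize
        if !allocated[port] {
            allocated[port] = true
            return port, nil
        }
    }
    return 0, errors.New("node port range is full")
}

func main() {
    allocated := map[int]bool{}
    for i := 0; i < 3; i++ {
        port, _ := allocateNodePort(allocated)
        fmt.Println("allocated node port:", port)
    }
}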

Related

Kubernetes timeout

I can't for the life of me get this to connect.
It is a Go application running on Kubernetes.
The Docker image runs just fine; the pod launches, but the connection times out.
apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30008
  selector:
    app: ark-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ark-backend
  namespace: ark
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ark-api
  template:
    metadata:
      labels:
        app: ark-api
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: ark-api-container
          image: xxx
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: web
              containerPort: 8080
I am able to boot the docker container just fine and it runs.
Turns out the container gets terminated and I have no idea why.
You could check whether port 8080 is listening inside the container:
kubectl exec -it <pod_name> -n <namespace> -- netstat -ntpl
If there is no netstat command in the container, you could build a base image that includes it.
Also check whether port 30008 is listening on the node, by running the following command on the node:
netstat -ntpl | grep 30008
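If you can't run netstat on the node, a quick reachability check from another machine works too; a minimal Go sketch (the node address below is a placeholder):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Replace with your node's IP address; 30008 is the nodePort from the Service above.
    addr := "192.168.1.10:30008"

    conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    if err != nil {
        fmt.Println("node port not reachable:", err)
        return
    }
    conn.Close()
    fmt.Println("node port is reachable")
}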
You could also try not specifying the node port in the service YAML and let Kubernetes choose the nodePort for you. That avoids picking a port that is already in use on the node.
apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: ClusterIP
  selector:
    component: ark-api
  ports:
    - port: 80
      targetPort: 8080
Try using ClusterIP instead of NodePort. If you are using any kind of Ingress, you have to create rules in your Ingress config so it can expose your service to the outside world via your load balancer.
I deleted the service and used port forwarding and was able to boot everything. I'll have to circle back to the service to try and figure it out.

Unable to expose docker LoadBalancer service

I am trying to deploy a Docker image that is in a public repository. I am creating a LoadBalancer service and trying to expose the service on my system's IP address, not 127.0.0.1.
I am using Windows 10, and my Docker uses WSL2 instead of Hyper-V.
Below is my .yaml file. The service inside will run on port 4200, so to avoid any confusion I kept all the ports at 4200.
apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  selector:
    app: hoopla
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
  clusterIP: 10.96.1.3
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.168.0.144
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hoopla
  name: hoopla
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hoopla
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hoopla
    spec:
      containers:
        - image: pubrepo/myimg:latest
          name: hoopla
          ports:
            - containerPort: 4200
Can anybody help me understand what mistake I am making? I basically want to expose this on my system's IP address.
The LoadBalancer service type requires a cloud provider's load balancer (https://kubernetes.io/docs/concepts/services-networking/service/):
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created
If you want to expose your service to your local machine, use the NodePort service type, for example. If you just want to test your web app, you can use the ClusterIP service type and set up a port-forward, for example with your ClusterIP service:
kubectl port-forward svc/hoopla 4200:4200
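Once the port-forward is running, the service is reachable on localhost; a minimal Go check (assuming the app answers HTTP on port 4200):

package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Assumes `kubectl port-forward svc/hoopla 4200:4200` is running in another terminal.
    resp, err := http.Get("http://localhost:4200/")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status)
    fmt.Println(string(body))
}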

Posting a json message to a webserver inside a pod

I am new to Kubernetes.
I have implemented a web server inside a pod and set up a NodePort service for that pod.
I want to send a POST request with a custom message (in JSON) to the pod after it has been created and is ready to use. I want to use the Go client library for this. Could you please let me know how I can do that?
Which part of the library would help here?
Thanks.
Say the Go server runs locally and you normally use http://localhost:3000 to access it. The pod then has a containerPort of 3000.
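For reference, a minimal Go server of that shape might look like this (the handler and message are placeholders):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from the pod")
    })

    // Listen on all interfaces so the container port is reachable from outside the pod.
    log.Fatal(http.ListenAndServe(":3000", nil))
}

The Deployment then exposes that port as containerPort 3000: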
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-deployment
  labels:
    app: GoWeb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: GoWeb
  template:
    metadata:
      labels:
        app: GoWeb
    spec:
      containers:
        - name: go-web
          image: me/go-web:1.0.1
          ports:
            - containerPort: 3000
The Service is then an abstraction over that pod; it describes how to access one or many Pods running that service.
The nodePort of the service is 31024.
apiVersion: v1
kind: Service
metadata:
  name: go-web-service
spec:
  type: NodePort
  selector:
    app: GoWeb
  ports:
    - port: 3000
      nodePort: 31024
The application is published on http://node-ip:node-port for the public to consume. Kubernetes manages the mappings between the node and the container in the background.
| User | -> | Node:nodePort | -> | Pod:containerPort |
The internal Service and Pod IPs are usually not reachable from outside the cluster (unless you specifically set the cluster up that way), whereas the nodes themselves often carry an IP address that is routable/contactable.
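As for the POST itself: you don't need the Kubernetes client library for the application request. Once the Service exposes the pod on a node port, a plain net/http call does it. A minimal sketch (the node IP and JSON payload are placeholders):

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    // Replace with a node's IP address; 31024 is the nodePort from the Service above.
    url := "http://192.168.1.10:31024/"

    // Example payload; adjust to whatever the server expects.
    payload, err := json.Marshal(map[string]string{"message": "hello"})
    if err != nil {
        log.Fatal(err)
    }

    resp, err := http.Post(url, "application/json", bytes.NewReader(payload))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println(resp.Status, string(body))
}

The Kubernetes Go client (client-go) only comes into play if you want to look up the node IP or the assigned nodePort from the API instead of hard-coding them.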

Not able to call external resources through kubernetes ingress

I am trying to configure Ingress resources in Kubernetes. I want to know whether I can access external resources via Kubernetes (for example, I installed Kibana in a virtual machine and I want to access it through a Kubernetes Ingress, as below):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
    - host: test.com
      http:
        paths:
          - path: "/"
            backend:
              serviceName: service1
              servicePort: 1000
          - path: "/test"
            backend:
              serviceName: service2.test
              servicePort: 2000
          - path: "/kibana"
            backend:
              serviceName: <ip-address>
              servicePort: 9092
Any suggestions: is this the right way of calling external resources, or can we not make such a call because the resource is outside of Kubernetes?
I am trying to call it as test.com/kibana.
Please suggest.
For external resources you should create an Endpoints object.
This is explained under Services without selectors:
Services most commonly abstract access to Kubernetes Pods, but they can also abstract other kinds of backends. For example:
You want to have an external database cluster in production, but in your test environment you use your own databases.
You want to point your Service to a Service in a different Namespace or on another cluster.
You are migrating a workload to Kubernetes. Whilst evaluating the approach, you run only a proportion of your backends in Kubernetes.
In any of these scenarios you can define a Service without a Pod selector. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where the backend is running by adding an Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
  - addresses:
      - ip: 192.0.2.42
    ports:
      - port: 9376
So once you add the Endpoints and set up a Service for it, you will be able to use it in an Ingress.
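If you would rather create the Endpoints object from Go than from YAML, a sketch using client-go (recent versions) might look like this; the kubeconfig path and namespace are placeholders, and the address and port match the example above:

package main

import (
    "context"
    "log"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a clientset from the local kubeconfig (placeholder path).
    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // The name must match the selector-less Service ("my-service") defined above.
    ep := &corev1.Endpoints{
        ObjectMeta: metav1.ObjectMeta{Name: "my-service"},
        Subsets: []corev1.EndpointSubset{{
            Addresses: []corev1.EndpointAddress{{IP: "192.0.2.42"}},
            Ports:     []corev1.EndpointPort{{Port: 9376}},
        }},
    }

    _, err = clientset.CoreV1().Endpoints("default").Create(context.TODO(), ep, metav1.CreateOptions{})
    if err != nil {
        log.Fatal(err)
    }
    log.Println("endpoints created")
}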

Running socket.io in Google Container Engine with multiple pods fails

I'm trying to run a socket.io app using Google Container Engine. I've set up the Ingress service, which creates a Google load balancer that points to the cluster. If I have one pod in the cluster, all works well. As soon as I add more, I get tons of socket.io errors. It looks like the connections end up going to different pods in the cluster, and I suspect that is the problem, given all the polling and upgrading socket.io does.
I setup the load balancer to use sticky sessions based on IP.
Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?
How can I set it up to ensure session affinity to a particular POD in the cluster?
NOTE: I manually set the sessionAffinity on the cloud load balancer.
Here is my Ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
    - port: 80
      targetPort: http-port
  selector:
    app: myApp
First off, you need to set session affinity at the Ingress resource level, not on your load balancer (the load balancer setting only gives affinity to a specific node in the target group):
Here is an example Ingress spec:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
    - host: $HOST
      http:
        paths:
          - path: /
            backend:
              serviceName: $SERVICE_NAME
              servicePort: $SERVICE_PORT
Second, you probably need to tune your ingress-controller to allow longer connection times. Everything else, by default, supports websocket proxying.
If you are still having issues, please provide the output of kubectl get -o yaml pod/<ingress-controller-pod> and kubectl get -o yaml ing/<your-ingress-name>.
Hope this helps, good luck!
