How to configure a GKE load balancer for a golang tcp server?

After deploying a golang server container and a GKE load balancer, I can successfully connect to the external IP of the load balancer, but no data reaches the server container.
It works as expected when I run the server container locally and point the client at localhost. I changed it to serve HTTP requests and it worked fine with the same Kubernetes manifests. However, if I try to serve both TCP and HTTP (on different ports), then neither works on GKE, but again both work fine locally. So I suspect it has something to do with either how I configured the load balancer, or that the way I'm listening for TCP connections in the server breaks something when running on GKE but not locally.
K8s Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: steel-server-service
spec:
  type: LoadBalancer
  selector:
    app: steel-server
  ports:
    - protocol: TCP
      name: tcp
      port: 12345
      targetPort: 12345
K8s Deployment Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: steel-server-deployment
  labels:
    app: steel-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: steel-server
  template:
    metadata:
      labels:
        app: steel-server
    spec:
      containers:
        - name: steel-server
          image: gcr.io/<my-project-id>/steel-server:latest
          ports:
            - containerPort: 12345
              name: tcp
Relevant Go TCP Server Code
// Listen for TCP connections on all interfaces on port 12345.
server, err := net.Listen("tcp", ":12345")
if err != nil {
	log.Fatalln("Couldn't start up tcp server: ", err)
}
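For reference, a self-contained version of such a server with an accept loop (a sketch: the echo handler and overall structure are illustrative assumptions, not the asker's actual code):

package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Listen on all interfaces on port 12345, matching the containerPort above.
	server, err := net.Listen("tcp", ":12345")
	if err != nil {
		log.Fatalln("Couldn't start up tcp server: ", err)
	}
	defer server.Close()

	for {
		conn, err := server.Accept()
		if err != nil {
			log.Println("accept error:", err)
			continue
		}
		// Handle each connection concurrently; echoing is a stand-in handler.
		go func(c net.Conn) {
			defer c.Close()
			io.Copy(c, c)
		}(conn)
	}
}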

First, run kubectl get svc to see which ports are open on the load balancer, since you got an external IP from GCP for your type: LoadBalancer service.
apiVersion: v1
kind: Service
metadata:
  name: steel-server-service
spec:
  type: LoadBalancer
  selector:
    app: steel-server
  ports:
    - protocol: TCP
      name: tcp
      port: 80
      targetPort: 12345
Try this service config: I changed port to 80 while keeping targetPort the same as the container port.
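Once the external IP is provisioned, you could sanity-check the TCP path from outside the cluster with netcat (assuming nc is available; substitute the EXTERNAL-IP reported by kubectl get svc):

nc -v <EXTERNAL-IP> 80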

Related

Unable to open service externally on Rancher

Issue with rancher opening external port
Installed Rancher 2.6 and deployed a Spring Boot app from a Docker image with port 8080 open.
Set the internal port to 8080, the external (NodePort) port to 31000, and the port on the image container to 8080.
Trying to access :31000, I'm getting a 404. Please help, I'm stuck here!
I tried changing the node port, disabling the firewall, and restarting the machine.
The fact that you are getting a 404 means that something is answering your request.
I would do the following to troubleshoot.
Use port-forward to check that your app is running fine on the cluster:
kubectl port-forward svc/movies-db-np 8080
Check it on http://localhost:8080
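For example, from another terminal (plain curl; the path depends on what your app serves):

curl -i http://localhost:8080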
Verify the service has the correct selector for your pod/deployment, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 31000
  selector:
    app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: my.docker/myapp:1.0.0
          name: myapp
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
...
Note here that the service will match pods with the label app=myapp.
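To confirm the selector actually matches running pods, you could check that the Service has endpoints (standard kubectl commands; myapp is the example Service above):

kubectl get endpoints myapp
kubectl describe svc myapp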

Kubernetes timeout

I can't for the life of me get this to connect.
It is a golang application running on Kubernetes.
The Docker container runs just fine; the pod launches, but the connection times out.
apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30008
  selector:
    app: ark-api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ark-backend
  namespace: ark
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ark-api
  template:
    metadata:
      labels:
        app: ark-api
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: ark-api-container
          image: xxx
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: web
              containerPort: 8080
I am able to boot the Docker container just fine and it runs.
Turns out the container gets terminated, and I have no idea why.
You could check whether port 8080 is listening inside the container:
kubectl exec -it <pod_name> -n <namespace> -- netstat -ntpl
If there is no netstat command in the container, you could try to build a base image with it.
Also check whether the node port 30008 is listening on the node. Run the following command on the node:
netstat -ntpl | grep 30008
Also, you could avoid specifying the node port in the service YAML and let Kubernetes choose one for you. That avoids picking a port that is already in use on the node.
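For example, a NodePort variant of the service with nodePort omitted, so Kubernetes assigns one from its default range (a sketch based on the manifests above):

apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: NodePort
  selector:
    app: ark-api
  ports:
    - port: 80
      targetPort: 8080

kubectl get svc ark-service -n ark will then show the assigned node port.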
apiVersion: v1
kind: Service
metadata:
  name: ark-service
  namespace: ark
spec:
  type: ClusterIP
  selector:
    app: ark-api
  ports:
    - port: 80
      targetPort: 8080
Try using ClusterIP instead of NodePort. If you are using any kind of Ingress, you have to create rules in your Ingress config so it can expose your service to the outside web via your load balancer.
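A minimal Ingress rule for that setup might look like the following (a sketch: the hostname is hypothetical, and it assumes an ingress controller is installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ark-ingress
  namespace: ark
spec:
  rules:
    - host: ark.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ark-service
                port:
                  number: 80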
I deleted the service and used port forwarding and was able to boot everything. I'll have to circle back to the service to try and figure it out.

Unable to expose docker LoadBalancer service

I am trying to deploy a Docker image from a public repository. I am creating a LoadBalancer service and trying to expose the service on my system's IP address, not 127.0.0.1.
I am using Windows 10, and my Docker uses WSL2 instead of Hyper-V.
Below is my .yaml file. The service inside runs on port 4200, so to avoid any kind of confusion I kept all the ports at 4200.
apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  selector:
    app: hoopla
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
  clusterIP: 10.96.1.3
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.168.0.144
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: hoopla
  name: hoopla
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: hoopla
  template:
    metadata:
      labels:
        app.kubernetes.io/name: hoopla
    spec:
      containers:
        - image: pubrepo/myimg:latest
          name: hoopla
          ports:
            - containerPort: 4200
Can anybody help me understand what mistake I am making? I basically want to expose this on my system's IP address.
The LoadBalancer service type requires a cloud provider's load balancer (https://kubernetes.io/docs/concepts/services-networking/service/):
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
If you want to expose your service on your local machine, use the NodePort service type, for example. If you just want to test your web app, you can use the ClusterIP service type and port-forward to it, for example with your ClusterIP service:
kubectl port-forward svc/hoopla 4200:4200
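A NodePort sketch for this case (note the selector here matches the pod label the Deployment actually sets, app.kubernetes.io/name: hoopla; the nodePort value is an arbitrary pick from the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: hoopla
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: hoopla
  ports:
    - protocol: TCP
      port: 4200
      targetPort: 4200
      nodePort: 30420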

Access kubernetes service externally

I'm following the spring and kubernetes integration tutorial:
https://spring.io/guides/gs/spring-boot-kubernetes/
In my current scenario, I have 1 master and 2 worker servers.
When I deploy the file below using the command kubectl apply -f deployment.yaml, I can make a request from within the master server using kubectl port-forward svc/demo 8080:8080 and curl localhost:8080/actuator/health.
What I want is to make an external request (from a public computer, my computer) to the service I created (kubernetes_master_ip:8080/actuator), but when I try this, I get "connection refused".
What is missing?
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demo
    spec:
      containers:
        - image: springguides/demo
          name: demo
          resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  type: ClusterIP
status:
  loadBalancer: {}
You need to change the type of the service to expose the application. There are two ways:
- LoadBalancer type: only on cloud providers.
- NodePort type: can be done on-premise or in minikube.
Change your service YAML to the one below:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demo
  name: demo
spec:
  ports:
    - name: 8080-8080
      port: 8080
      nodePort: 31234
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  type: NodePort
Once the service is applied, check the node IP on which the container is running:
kubectl get pods -o wide
then try to access the application at:
http://node_ip:31234/actuator
Alternatively, you can change your service type to LoadBalancer, which will expose your service to the external internet via an IP address. The LoadBalancer service type will only work with cloud providers.
For more details you can visit : https://kubernetes.io/docs/concepts/services-networking/
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demo
  name: demo
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo
  type: LoadBalancer
Save it as a YAML file and apply it; it will provision an external IP address.
You can find the IP and access the service with:
kubectl get svc

Minikube VM hyperkit: Spring Boot: connect to local machine

I have a minikube cluster running on macOS and a simple Spring Boot REST API that connects to Redis and MongoDB, which I have installed and running locally.
I prefer not to run Redis / MongoDB in a Docker container.
I will probably run them remotely in the cloud, therefore I would probably just connect to an external IP address.
What I don't understand is what IP address I should use to connect to my localhost machine.
I start up Minikube with the hyperkit VM driver.
Edit:
I also tried to start it using VirtualBox:
minikube start --vm-driver=virtualbox
In my spring boot application, I've configured:
spring.data.mongodb.host = 10.0.2.2
spring.redis.host = 10.0.2.2
But still, I get connection errors.
This works when I run the application locally on my host machine.
For the sake of completeness, this is my yaml file:
---
apiVersion: v1
kind: Service
metadata:
name: posts-api
labels:
app: posts-api
env: dev
spec:
type: NodePort
selector:
app: posts-api
ports:
- protocol: TCP
port: 8083
name: http
---
apiVersion: v1
kind: ReplicationController
metadata:
name: posts-api
spec:
replicas: 1
template:
metadata:
labels:
app: posts-api
spec:
containers:
- name: posts-api
image: kimgysen/posts-api:latest
ports:
- containerPort: 8083
livenessProbe:
httpGet:
path: /health
port: 8083
initialDelaySeconds: 120
timeoutSeconds: 3
I'll give you the answer I gave to someone with the same problem (different tech):
Kubernetes pod unable to connect to rabbit mq instance running locally
Replace the IP and port number, and the Service and Endpoints names as appropriate.
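A minimal sketch of that pattern adapted to this question: a Service without a selector plus a manually defined Endpoints object of the same name pointing at the host. The IP 10.0.2.2 is the VirtualBox host address already used in the question; the name mongodb and port 27017 (MongoDB's default) are illustrative, so adjust them for Redis or your actual setup:

apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mongodb   # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.2.2   # host IP as seen from the VM (from the question)
    ports:
      - port: 27017

Inside the cluster the application can then set spring.data.mongodb.host = mongodb.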
