Load balancing K8s Pods with operator-framework - go

I built a simple operator by tweaking the memcached example. The only major difference is that I need two Docker images in my Pods. I got the Deployment running. My test.yaml, used to deploy with kubectl:
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
name: "solar-demo"
spec:
size: 3
group: cache.example.com
names:
kind: Memcached
listKind: MemcachedList
plural: solar-demos
singular: solar-demo
scope: Namespaced
version: v1alpha1
I am still missing one piece though: the load-balancing part. Currently, under Docker, we use the nginx image as a reverse proxy configured like this:
upstream api_microservice {
    server api:3000;
}

upstream solar-svc_microservice {
    server solar-svc:3001;
}

server {
    listen $NGINX_PORT default;

    location /city {
        proxy_pass http://api_microservice;
    }

    location /solar {
        proxy_pass http://solar-svc_microservice;
    }

    root /html;
    location / {
        try_files /$uri /$uri/index.html /$uri.html /index.html =404;
    }
}
I want my cluster to expose port 8080 and forward traffic to ports 3000 and 3001 on the containers running inside my Pods.
My deployment:
dep := &appsv1.Deployment{
TypeMeta: metav1.TypeMeta{
APIVersion: "apps/v1",
Kind: "Deployment",
},
ObjectMeta: metav1.ObjectMeta{
Name: m.Name,
Namespace: m.Namespace,
},
Spec: appsv1.DeploymentSpec{
Replicas: &replicas,
Selector: &metav1.LabelSelector{
MatchLabels: ls,
},
Template: v1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: ls,
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Image: "shmukler/docker_solar-svc",
Name: "solar-svc",
Command: []string{"npm", "run", "start-solar-svc"},
Ports: []v1.ContainerPort{{
ContainerPort: 3001,
Name: "solar-svc",
}},
},
{
Image: "shmukler/docker_solar-api",
Name: "api",
Command: []string{"npm", "run", "start-api"},
Ports: []v1.ContainerPort{{
ContainerPort: 3000,
Name: "solar-api",
}},
},
},
},
},
}
What do I need to add to have an Ingress or something similar running in front of my Pods?
Thank you

What do I need to add to have an Ingress or something similar running in front of my Pods?
Yes, Ingress is designed for exactly that kind of task.
Ingress offers path-based routing, which can reproduce the configuration you showed for Nginx. Moreover, one of the most popular Ingress controller implementations uses Nginx as the proxy.
Ingress is basically a set of rules that allows traffic, otherwise dropped or forwarded elsewhere, to reach the cluster services.
Here is an example of an Ingress configuration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: '' # Empty value means 'any host'
    http:
      paths:
      - path: /city
        backend:
          serviceName: myapp
          servicePort: 3000
      - path: /solar
        backend:
          serviceName: myapp
          servicePort: 3001
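(Note: the extensions/v1beta1 Ingress API shown here has since been removed from Kubernetes; on current clusters use networking.k8s.io/v1, where the backend is written as a service name/port stanza, as some of the later examples show.)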
Also, because Pods are not static, you should create a Service object that gives the Ingress a stable entry point to your application.
Here is an example of the Service:
kind: Service
apiVersion: v1
metadata:
  name: myapp
spec:
  selector:
    app: "NAME_OF_YOUR_DEPLOYMENT"
  ports:
  - name: city
    protocol: TCP
    port: 3000
    targetPort: 3000
  - name: solar
    protocol: TCP
    port: 3001
    targetPort: 3001
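Since you are already building objects in Go with operator-framework, you can construct the Service in your reconciler the same way as the Deployment. A minimal sketch, assuming the same label map ls and custom resource m from your Deployment code (the port names mirror the Service yaml above):

svc := &v1.Service{
    ObjectMeta: metav1.ObjectMeta{
        Name:      m.Name,
        Namespace: m.Namespace,
    },
    Spec: v1.ServiceSpec{
        // Select the Pods created by the Deployment above.
        Selector: ls,
        Ports: []v1.ServicePort{
            // One Service port per container port in the Pod template.
            {Name: "city", Protocol: v1.ProtocolTCP, Port: 3000, TargetPort: intstr.FromInt(3000)},
            {Name: "solar", Protocol: v1.ProtocolTCP, Port: 3001, TargetPort: intstr.FromInt(3001)},
        },
    },
}
// intstr comes from "k8s.io/apimachinery/pkg/util/intstr".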

Related

Is it possible to have a single ingress resource for all mulesoft applications in RTF in Self Managed Kubernetes on AWS?

Can we have a single ingress resource for deployment of all mulesoft applications in RTF in Self Managed Kubernetes on AWS?
Ingress template:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp1-svc
            port:
              number: 80
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp2-svc
            port:
              number: 80
temp1-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp1-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp1-svc
temp2-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp2-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp2-svc
I am new to RTF. Are there any changes to be made in the Ingress resource, or do we need a separate ingress resource for each application? Any help would be appreciated.
Thanks
Generally, managing a separate ingress per application is a good option.
You can also use a single ingress to route and forward traffic across the cluster.
Single ingress for all services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      # Each rule needs a distinct path prefix; with two identical paths the
      # second rule would never match. $2 is the part after the prefix, which
      # the rewrite-target annotation passes on to the backend.
      - pathType: ImplementationSpecific
        path: /service(/|$)(.*)
        backend:
          service:
            name: service
            port:
              number: 80
      - pathType: ImplementationSpecific
        path: /service-2(/|$)(.*)
        backend:
          service:
            name: service-2
            port:
              number: 80
The benefit of multiple, separate ingress resources is that each one can carry its own annotations: in one you may want to enable CORS, while in another you want to raise the proxy body size, and so on. So it is often better to manage a separate ingress for each microservice, as in the sketch below.
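As an illustration, a sketch of two ingresses with different annotations (the service, host, and path names here are hypothetical; the annotations are standard ingress-nginx ones):

# Ingress for an application that needs CORS enabled
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /app-one
        backend:
          service:
            name: app-one-svc
            port:
              number: 80
---
# Ingress for an application that needs a larger request body limit
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-two-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /app-two
        backend:
          service:
            name: app-two-svc
            port:
              number: 80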

Kubernetes logs don't print requests output when I use a port in an address

I've created a cluster with minikube
minikube start
Applied this yaml manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway-deployment
spec:
  selector:
    matchLabels:
      app: gateway
  replicas: 1
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: docker_gateway
        imagePullPolicy: Never
        ports:
        - containerPort: 4001
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    port: 4001
And my Go app in the docker_gateway container is just a gin HTTP server with one route:
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    r.GET("/hello", func(c *gin.Context) {
        c.JSON(200, gin.H{
            "message": "hello",
        })
    })
    // Serve the gin router on port 4001.
    server := &http.Server{
        Addr:    ":4001",
        Handler: r,
    }
    server.ListenAndServe()
}
In Postman I make requests to 192.168.252.130:4001/hello and get responses.
But the Pod's logs in Kubernetes don't show those requests. I expect to see this:
[GIN] 2019/10/25 - 14:17:20 | 200 | 1.115µs | 192.168.252.1| GET /hello
But an interesting thing happens when I add an Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: gateway
    servicePort: 4001
I am able to make requests to both 192.168.252.130/hello and 192.168.252.130:4001/hello.
Without the port, the Pod's logs show the requests, but with the port they don't:
[GIN] 2019/10/25 - 14:19:13 | 200 | 2.433µs | 192.168.252.1| GET /hello
That's because you cannot access a Kubernetes Service of type ClusterIP from outside the cluster (in your case, from outside minikube).
Learn more about service types here
To access your service from outside, change your service to NodePort type.
Something like:
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  selector:
    app: gateway
  ports:
  - protocol: TCP
    nodePort: 30036
    port: 4001
  type: NodePort
Then you will be able to access it at http://<minikube-ip>:30036/
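With minikube you can also print that URL directly with minikube service gateway --url.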

Istio - GKE - gRPC config stream closed; upstream connect error or disconnect/reset before headers. reset reason: connection failure

I am trying to run my Spring Boot microservice in a GKE cluster with Istio 1.1.5 (the latest version as of now). It throws an error and the pod never spins up. If I run it as a separate service in Kubernetes Engine it works perfectly, but with Istio it does not. The purpose of using Istio is to host multiple microservices and to use the features it provides. Here is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        tier: backend
        track: stable
    spec:
      containers:
      - name: backend
        image: "gcr.io/finacials/revenue-serv:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
        livenessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 15
          timeoutSeconds: 30
        readinessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 15
          timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
spec:
  ports:
  - port: 8081
    #targetPort: 8081
    #protocol: TCP
    name: http
  selector:
    app: revenue-serv
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
  - http:
      paths:
      - path: /revenue/.*
        backend:
          serviceName: revenue-serv
          servicePort: 8081
Thanks for your valuable feedback.
I have found the issue. I removed the readinessProbe and livenessProbe and created an ingress gateway and virtual service. It worked.
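(A likely explanation for the probe problem: with Istio's mutual TLS enabled, the kubelet's plain-HTTP health checks bypass the sidecar and get rejected, so the pod never becomes ready; newer Istio releases can rewrite the probes to work around this.)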
deployment & service:
#########################################################################################
# This is for deployment - Service & Deployment in Kubernetes ################
# Author: Arindam Banerjee ################
#########################################################################################
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue-serv
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        version: v1
    spec:
      containers:
      - name: revenue-serv
        image: "eu.gcr.io/rcup-mza-dev/revenue-serv:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
  namespace: dev
spec:
  ports:
  - port: 8081
    name: http
  selector:
    app: revenue-serv
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: revenue-serv-gateway # must match the name referenced by the VirtualService below
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: revenue-serv-virtualservice
  namespace: dev
spec:
  hosts:
  - "*"
  gateways:
  - revenue-serv-gateway
  http:
  - route:
    - destination:
        host: revenue-serv
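After applying these, the application should be reachable through the external IP of the istio-ingressgateway service, which you can find with kubectl -n istio-system get service istio-ingressgateway.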

Deploy REST + gRPC server to k8s with ingress

I have used the sample gRPC HelloWorld application from https://github.com/grpc/grpc-go/tree/master/examples/helloworld. The example runs smoothly on my local system.
I want to deploy it to Kubernetes with use of an Ingress.
Below are my config files.
service.yaml - as NodePort
apiVersion: v1
kind: Service
metadata:
  name: grpc-scratch
  labels:
    run: grpc-scratch
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    run: example-grpc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc
          servicePort: 50051
I am unable to make a gRPC request to the server with the URL xyz.com/grpc. I get the error:
{
  "error": "14 UNAVAILABLE: Name resolution failure"
}
If I make the request to xyz.com, the error is:
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
Any help would be appreciated.
A backend of an Ingress object is a combination of a service name and a service port.
In your case you have serviceName: grpc as the backend, while your Service's actual name is grpc-scratch:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc-scratch
          servicePort: grpc
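Separately, note that in your Service the selector run: example-grpc does not match the Service's own label run: grpc-scratch; if the Pods are labeled run: grpc-scratch, the Service will have no endpoints, so make sure the selector matches the Pod labels.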

Ingress controller is not routing based on path in openshift

I am trying to configure an ingress controller in OpenShift to route requests to different pods based on path, and an Ingress seemed suitable for this requirement. I have two services and an ingress that routes to one of them based on path. Here is my configuration. My app is in Spring Boot.
apiVersion: v1beta3
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: data-service-1
    annotations:
      description: Exposes and load balances the data-indexer-service services
  spec:
    ports:
    - port: 7555
      targetPort: 7555
    selector:
      name: data-service-1
- apiVersion: v1
  kind: Service
  metadata:
    name: data-service-2
    annotations:
      description: Exposes and load balances the data-indexer-service services
  spec:
    ports:
    - port: 7556
      targetPort: 7556
    selector:
      name: data-service-2
- apiVersion: v1
  kind: Route
  metadata:
    name: data-service-2
  spec:
    host: doc.data.test.com
    port:
      targetPort: 7556
    to:
      kind: Service
      name: data-service-2
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: entityreindexmap
  spec:
    rules:
    - host: doc.data.test.com
      http:
        paths:
        - path: /dbpath1
          backend:
            serviceName: data-service-1
            servicePort: 7555
        - path: /dbpath2
          backend:
            serviceName: data-service-2
            servicePort: 7556
I couldn't get this working. I tried with doc.data.test.com/dbpath1 and doc.data.test.com/dbpath2. Any help is much appreciated.
