Ingress controller is not routing based on path in OpenShift - Spring Boot

I am trying to configure an ingress controller in OpenShift for one of my requirements. I need to route requests to different pods based on path, and an Ingress controller looked suitable for that. I have two services and an Ingress that routes to one of these services based on path. Here is my configuration; my app is in Spring Boot.
apiVersion: v1beta3
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: data-service-1
    annotations:
      description: Exposes and load balances the data-indexer-service services
  spec:
    ports:
    - port: 7555
      targetPort: 7555
    selector:
      name: data-service-1
- apiVersion: v1
  kind: Service
  metadata:
    name: data-service-2
    annotations:
      description: Exposes and load balances the data-indexer-service services
  spec:
    ports:
    - port: 7556
      targetPort: 7556
    selector:
      name: data-service-2
- apiVersion: v1
  kind: Route
  metadata:
    name: data-service-2
  spec:
    host: doc.data.test.com
    port:
      targetPort: 7556
    to:
      kind: Service
      name: data-service-2
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: entityreindexmap
  spec:
    rules:
    - host: doc.data.test.com
      http:
        paths:
        - path: /dbpath1
          backend:
            serviceName: data-service-1
            servicePort: 7555
        - path: /dbpath2
          backend:
            serviceName: data-service-2
            servicePort: 7556
I couldn't get this working. I tried doc.data.test.com/dbpath1 and doc.data.test.com/dbpath2. Any help is much appreciated.
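For what it's worth, OpenShift Routes themselves support path-based routing via spec.path, so a possible alternative is a pair of Routes on the same host (a sketch reusing the two services above; the Route names here are illustrative):
apiVersion: v1
kind: Route
metadata:
  name: data-service-1-path
spec:
  host: doc.data.test.com
  path: /dbpath1        # only requests whose path starts with /dbpath1
  to:
    kind: Service
    name: data-service-1
  port:
    targetPort: 7555
---
apiVersion: v1
kind: Route
metadata:
  name: data-service-2-path
spec:
  host: doc.data.test.com
  path: /dbpath2        # only requests whose path starts with /dbpath2
  to:
    kind: Service
    name: data-service-2
  port:
    targetPort: 7556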

Related

(invalid_token_response) An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body]

I'm creating microservices that are deployed in a docker-desktop Kubernetes cluster for development. I'm using Spring Security with Auth0, and the pods use Kubernetes-native service discovery coupled with Spring Cloud Gateway. When I log in using Auth0, it authenticates just fine, but the token that is received appears to be empty, based on the error given.
I'm new to Kubernetes, and this error only seems to occur when running the application on the Kubernetes cluster. If I use Eureka for local testing, Auth0 works completely fine. I've tried to research whether the issue is that the token can't be retrieved inside the Kubernetes cluster, and the only solution I've been able to find is to implement istioctl within the cluster.
FRONTEND deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-interface-app
  labels:
    app: user-interface-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-interface-app
  template:
    metadata:
      labels:
        app: user-interface-app
    spec:
      containers:
        - name: user-interface-app
          image: imageName:tag
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
          env:
            - name: GATEWAY_URL
              value: api-gateway-svc.default.svc.cluster.local
            - name: ZIPKIN_SERVER_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: zipkin_service_url
            - name: STRIPE_API_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-api-key
            - name: STRIPE_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-public-key
            - name: STRIPE_WEBHOOK_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-webhook-secret
            - name: AUTH_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-id
            - name: AUTH_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-secret
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-svc
spec:
  selector:
    app: user-interface-app
  type: ClusterIP
  ports:
    - port: 8084
      targetPort: 8084
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-lb
spec:
  selector:
    app: user-interface-app
  type: LoadBalancer
  ports:
    - name: frontend
      port: 8084
      targetPort: 8084
      protocol: TCP
    - name: request
      port: 80
      targetPort: 8084
      protocol: TCP
API-GATEWAY deployment.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-cm
data:
  cart_service_url: http://cart-service-svc.default.svc.cluster.local
  customer_profile_service_url: http://customer-profile-service-svc.default.svc.cluster.local
  order_service_url: http://order-service-svc.default.svc.cluster.local
  product_service_url: lb://product-service-svc.default.svc.cluster.local
  zipkin_service_url: http://zipkin-svc.default.svc.cluster.local:9411
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-app
  labels:
    app: api-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway-app
  template:
    metadata:
      labels:
        app: api-gateway-app
    spec:
      containers:
        - name: api-gateway-app
          image: imageName:imageTag
          imagePullPolicy: Always
          ports:
            - containerPort: 8090
          env:
            - name: PRODUCT_SERVICE_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: product_service_url
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-np
spec:
  selector:
    app: api-gateway-app
  type: NodePort
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
      nodePort: 30499
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-svc
spec:
  selector:
    app: api-gateway-app
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
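One thing that stands out in this config (an observation, not a confirmed fix): product_service_url uses the lb:// scheme, which tells Spring Cloud Gateway to resolve the host through a registered DiscoveryClient/load balancer such as Eureka. With plain Kubernetes DNS-based discovery there is no such client registered by default, so the usual form is a plain http:// URI pointing at the Service DNS name:
product_service_url: http://product-service-svc.default.svc.cluster.local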

Is it possible to have a single ingress resource for all MuleSoft applications in RTF on self-managed Kubernetes on AWS?

Can we have a single ingress resource for the deployment of all MuleSoft applications in RTF on self-managed Kubernetes on AWS?
Ingress template:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
    - host: example.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: temp1-svc
                port:
                  number: 80
          - pathType: Prefix
            path: /
            backend:
              service:
                name: temp2-svc
                port:
                  number: 80
temp1-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp1-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: temp1-svc
temp2-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp2-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: temp2-svc
I am new to RTF. Are there any changes to be made to the Ingress resource, or do we need a separate ingress resource for each application? Any help would be appreciated.
Thanks
Generally, managing a separate ingress per application is a good option. You can also use a single ingress and route/forward traffic across the cluster.
Single ingress for all services (each backend needs its own distinct path; the rewrite-target /$2 refers to the second capture group of the path regex):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
    - host: example.com
      http:
        paths:
          - pathType: ImplementationSpecific
            path: /service(/|$)(.*)       # each app gets its own path prefix
            backend:
              service:
                name: service
                port:
                  number: 80
          - pathType: ImplementationSpecific
            path: /service-2(/|$)(.*)
            backend:
              service:
                name: service-2
                port:
                  number: 80
The benefit of multiple ingress resources (a separate ingress per service) is that you can keep and configure different annotations for each ingress: in one you may want to enable CORS, while in another you want to change the proxy body size, etc. So it is often easier to manage one ingress per microservice, as the sketch below shows.
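As a sketch of that trade-off (service names and paths here are illustrative, not from the question), two separate ingresses can carry different annotations:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"      # CORS enabled only for this app
spec:
  ingressClassName: rtf-nginx
  rules:
    - host: example.com
      http:
        paths:
          - pathType: Prefix
            path: /app-one
            backend:
              service:
                name: app-one-svc
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-two-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"   # larger request bodies only here
spec:
  ingressClassName: rtf-nginx
  rules:
    - host: example.com
      http:
        paths:
          - pathType: Prefix
            path: /app-two
            backend:
              service:
                name: app-two-svc
                port:
                  number: 80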

Istio Traffic Shifting not working in Internal communication

I deployed Istio on my local Kubernetes cluster running on my Mac. I created this VirtualService, DestinationRule, and Gateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: code-gateway
  namespace: code
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "gateway.code"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: codemaster
  namespace: code
spec:
  hosts:
    - master.code
    - codemaster
  gateways:
    - codemaster-gateway
    - code-gateway
  http:
    - route:
        - destination:
            host: codemaster
            subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: codemaster-gateway
  namespace: code
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "master.code"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: codemaster
  namespace: code
spec:
  host: codemaster
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: "v1"
kind: "List"
items:
  - apiVersion: "v1"
    kind: "Service"
    metadata:
      labels:
        app: "codemaster"
        group: "code"
      name: "codemaster"
      namespace: "code"
    spec:
      ports:
        - name: http-web
          port: 80
          targetPort: 80
      selector:
        app: "codemaster"
        group: "code"
      type: "ClusterIP"
  - apiVersion: "apps/v1"
    kind: "Deployment"
    metadata:
      labels:
        app: "codemaster"
        group: "code"
        env: "production"
      name: "codemaster"
      namespace: "code"
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: "codemaster"
          group: "code"
      template:
        metadata:
          labels:
            app: "codemaster"
            version: "v1"
            group: "code"
            env: "production"
        spec:
          containers:
            - env:
                - name: "KUBERNETES_NAMESPACE"
                  valueFrom:
                    fieldRef:
                      fieldPath: "metadata.namespace"
                - name: "SPRING_DATASOURCE_URL"
                  value: "jdbc:postgresql://host.docker.internal:5432/code_master"
                - name: "SPRING_DATASOURCE_USERNAME"
                  value: "postgres"
                - name: "SPRING_DATASOURCE_PASSWORD"
                  value: "postgres"
              image: "kzone/code/codemaster:1.0.0"
              imagePullPolicy: "IfNotPresent"
              name: "codemaster"
              ports:
                - containerPort: 80
                  name: "http"
                  protocol: "TCP"
---
apiVersion: "v1"
kind: "List"
items:
  - apiVersion: "apps/v1"
    kind: "Deployment"
    metadata:
      labels:
        app: "codemaster"
        group: "code"
        env: "canary"
      name: "codemaster-canary"
      namespace: "code"
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: "codemaster"
          group: "code"
      template:
        metadata:
          labels:
            app: "codemaster"
            version: "v2"
            group: "code"
            env: "canary"
        spec:
          containers:
            - env:
                - name: "KUBERNETES_NAMESPACE"
                  valueFrom:
                    fieldRef:
                      fieldPath: "metadata.namespace"
                - name: "SPRING_DATASOURCE_URL"
                  value: "jdbc:postgresql://host.docker.internal:5432/code_master"
                - name: "SPRING_DATASOURCE_USERNAME"
                  value: "postgres"
                - name: "SPRING_DATASOURCE_PASSWORD"
                  value: "postgres"
              image: "kzone/code/codemaster:1.0.1"
              imagePullPolicy: "IfNotPresent"
              name: "codemaster"
              ports:
                - containerPort: 80
                  name: "http"
                  protocol: "TCP"
These are the services running in the code namespace:
codemaster ClusterIP 10.103.151.80 <none> 80/TCP 18h
gateway ClusterIP 10.104.154.57 <none> 80/TCP 18h
I deployed 2 spring-boot microservices to k8s. One is a spring-boot gateway.
These are the pods running in the code namespace:
codemaster-6cb7b8ddf5-mlpzn 2/2 Running 0 7h3m
codemaster-6cb7b8ddf5-sgzt8 2/2 Running 0 7h3m
codemaster-canary-756697d9c8-22qb2 2/2 Running 0 7h3m
gateway-5b5c8697f4-jpb4q 2/2 Running 0 7h3m
When I send a request to http://master.code/version (the gateway created for the codemaster service), it always goes to the correct subset.
But when I send a request via the spring-boot gateway (http://gateway.code/codemaster/version), the request doesn't go to subset v1 only; requests go round-robin to all 3 pods. This is what I see in Kiali.
I want to apply traffic shifting between the gateway and the other services.
Istio relies on the Host header of a request to apply its traffic rules. Since you are using the Spring Boot gateway to make the request, Ribbon hits the pod IP directly instead of going through the service. To avoid that, provide a static server list to the gateway: in your Spring Boot gateway config, route /version to http://master.code.cluster.local so that Ribbon's dynamic endpoint discovery is bypassed. This should fix the problem.
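For illustration, a static route in the gateway's application.yml might look like this (a sketch; the route id, predicate, and filter are assumptions, only the URI comes from the answer above):
spring:
  cloud:
    gateway:
      routes:
        - id: codemaster
          uri: http://master.code.cluster.local   # static URI, bypasses Ribbon's endpoint discovery
          predicates:
            - Path=/codemaster/**
          filters:
            - StripPrefix=1                        # /codemaster/version -> /version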
After doing some searching I found that there is no CNI in Docker-for-Mac Kubernetes. Because of that, traffic shifting does not work on Docker-for-Mac Kubernetes.
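For reference, once requests do go through the service, weight-based traffic shifting between the two subsets defined in the DestinationRule above is typically expressed in the VirtualService like this (the 90/10 split is illustrative):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: codemaster
  namespace: code
spec:
  hosts:
    - master.code
    - codemaster
  gateways:
    - codemaster-gateway
    - code-gateway
  http:
    - route:
        - destination:
            host: codemaster
            subset: v1
          weight: 90   # 90% of traffic stays on v1
        - destination:
            host: codemaster
            subset: v2
          weight: 10   # 10% canary traffic to v2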

Istio - GKE - gRPC config stream closed; upstream connect error or disconnect/reset before headers. reset reason: connection failure

I am trying to run my Spring Boot microservice in a GKE cluster with Istio 1.1.5, the latest version as of now. It throws an error and the pod never spins up. If I run it as a separate service in Kubernetes Engine it works perfectly, but with Istio it does not. The purpose of using Istio is to host multiple microservices and to use the features Istio provides. Here is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        tier: backend
        track: stable
    spec:
      containers:
        - name: backend
          image: "gcr.io/finacials/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
spec:
  ports:
    - port: 8081
      #targetPort: 8081
      #protocol: TCP
      name: http
  selector:
    app: revenue-serv
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /revenue/.*
            backend:
              serviceName: revenue-serv
              servicePort: 8081
Thanks for your valuable feedback.
I have found the issue. I removed the readinessProbe and livenessProbe, and created an ingress gateway and virtual service instead. It worked.
deployment & service:
#########################################################################################
# This is for deployment - Service & Deployment in Kubernetes ################
# Author: Arindam Banerjee ################
#########################################################################################
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue-serv
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        version: v1
    spec:
      containers:
        - name: revenue-serv
          image: "eu.gcr.io/rcup-mza-dev/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
  namespace: dev
spec:
  ports:
    - port: 8081
      name: http
  selector:
    app: revenue-serv
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: worldcup-serv-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: revenue-serv-virtualservice
  namespace: dev
spec:
  hosts:
    - "*"
  gateways:
    - revenue-serv-gateway
  http:
    - route:
        - destination:
            host: revenue-serv

Deploy REST + gRPC server to k8s with ingress

I have used the sample gRPC HelloWorld application https://github.com/grpc/grpc-go/tree/master/examples/helloworld. This example runs smoothly on my local system.
I want to deploy it to Kubernetes with the use of an Ingress.
Below are my config files.
service.yaml - as NodePort
apiVersion: v1
kind: Service
metadata:
  name: grpc-scratch
  labels:
    run: grpc-scratch
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
    - name: grpc
      port: 50051
      protocol: TCP
      targetPort: 50051
  selector:
    run: example-grpc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: true
spec:
  tls:
    - hosts:
        - xyz.com
      secretName: grpc-secret
  rules:
    - host: xyz.com
      http:
        paths:
          - path: /grpc
            backend:
              serviceName: grpc
              servicePort: 50051
I am unable to make a gRPC request to the server with the url xyz.com/grpc. I get the error
{
  "error": "14 UNAVAILABLE: Name resolution failure"
}
If I make a request to xyz.com the error is
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
Any help would be appreciated.
A backend of the ingress object is a combination of service and port names.
In your case you have serviceName: grpc as a backend, while your service's actual name is name: grpc-scratch:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"   # annotation values must be strings, so quote it
spec:
  tls:
    - hosts:
        - xyz.com
      secretName: grpc-secret
  rules:
    - host: xyz.com
      http:
        paths:
          - path: /grpc
            backend:
              serviceName: grpc-scratch
              servicePort: grpc
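One caveat worth adding (an assumption about which controller is in use, not part of the original answer): the nginx.org/grpc-services annotation belongs to NGINX Inc.'s ingress controller, while the community kubernetes/ingress-nginx controller expects a backend-protocol annotation instead, roughly:
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # community kubernetes/ingress-nginx equivalent of nginx.org/grpc-services
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
In both cases gRPC through the ingress generally requires TLS on the listener, which the tls: block above already provides.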
