(invalid_token_response) An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body] - spring

I'm creating microservices that are deployed in a docker-desktop Kubernetes cluster for development. I'm using Spring Security with Auth0, and the pods use Kubernetes native service discovery together with Spring Cloud Gateway. When I log in using Auth0, authentication succeeds, but judging by the error the token response that comes back appears to be empty.
I'm new to Kubernetes, and this error only seems to occur when the application runs on the Kubernetes cluster. If I use Eureka for local testing, Auth0 works completely fine. I've tried to research whether the token simply cannot be retrieved from inside the Kubernetes cluster, and the only solution I've been able to find so far is to install Istio (via istioctl) in the cluster.
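For context, the Spring Security configuration itself isn't shown in the question; with Auth0 it is typically an OAuth2 client registration along these lines. This is only a sketch: the registration name and tenant domain are placeholders, while the env var names match the deployment manifest below.
application.yml (sketch):
spring:
  security:
    oauth2:
      client:
        registration:
          auth0:                              # registration name is an assumption
            client-id: ${AUTH_CLIENT_ID}      # injected via secretKeyRef below
            client-secret: ${AUTH_CLIENT_SECRET}
            scope:
              - openid
              - profile
              - email
        provider:
          auth0:
            issuer-uri: https://YOUR_TENANT.auth0.com/   # placeholder tenant; used to discover the token endpoint that returns the 401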
FRONTEND deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-interface-app
  labels:
    app: user-interface-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-interface-app
  template:
    metadata:
      labels:
        app: user-interface-app
    spec:
      containers:
        - name: user-interface-app
          image: imageName:tag
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
          env:
            - name: GATEWAY_URL
              value: api-gateway-svc.default.svc.cluster.local
            - name: ZIPKIN_SERVER_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: zipkin_service_url
            - name: STRIPE_API_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-api-key
            - name: STRIPE_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-public-key
            - name: STRIPE_WEBHOOK_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-webhook-secret
            - name: AUTH_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-id
            - name: AUTH_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-secret
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-svc
spec:
  selector:
    app: user-interface-app
  type: ClusterIP
  ports:
    - port: 8084
      targetPort: 8084
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-lb
spec:
  selector:
    app: user-interface-app
  type: LoadBalancer
  ports:
    - name: frontend
      port: 8084
      targetPort: 8084
      protocol: TCP
    - name: request
      port: 80
      targetPort: 8084
      protocol: TCP
API-GATEWAY deployment.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-cm
data:
  cart_service_url: http://cart-service-svc.default.svc.cluster.local
  customer_profile_service_url: http://customer-profile-service-svc.default.svc.cluster.local
  order_service_url: http://order-service-svc.default.svc.cluster.local
  product_service_url: lb://product-service-svc.default.svc.cluster.local
  zipkin_service_url: http://zipkin-svc.default.svc.cluster.local:9411
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-app
  labels:
    app: api-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway-app
  template:
    metadata:
      labels:
        app: api-gateway-app
    spec:
      containers:
        - name: api-gateway-app
          image: imageName:imageTag
          imagePullPolicy: Always
          ports:
            - containerPort: 8090
          env:
            - name: PRODUCT_SERVICE_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: product_service_url
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-np
spec:
  selector:
    app: api-gateway-app
  type: NodePort
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
      nodePort: 30499
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-svc
spec:
  selector:
    app: api-gateway-app
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
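The gateway's route definitions aren't shown in the question. A Spring Cloud Gateway application.yml that consumes the PRODUCT_SERVICE_URL variable injected above might look roughly like this; the route id and path predicate here are assumptions, not taken from the question.
spring:
  cloud:
    gateway:
      routes:
        - id: product-service            # assumed route id
          uri: ${PRODUCT_SERVICE_URL}    # value injected from gateway-cm above
          predicates:
            - Path=/products/**          # example predicate only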

Related

Istio virtual service subset not able to send request to specific pods

I have the following scenario:
FastAPI (API Gateway)
Users (gRPC service)
Below is the deployment yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users
  labels:
    app: users
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users
      version: v1
  template:
    metadata:
      labels:
        app: users
        version: v1
    spec:
      containers:
        - image: users:v0.0.1
          imagePullPolicy: Always
          name: svc
          ports:
            - containerPort: 9090
---
kind: Service
apiVersion: v1
metadata:
  name: users
  labels:
    app: users
spec:
  selector:
    app: users
  ports:
    - name: grpc
      protocol: TCP
      port: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi
  labels:
    app: fastapi
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
      version: v1
  template:
    metadata:
      labels:
        app: fastapi
        version: v1
    spec:
      containers:
        - image: fastapi:latest
          imagePullPolicy: Always
          name: web
          ports:
            - containerPort: 8080
          env:
            - name: USERS_SVC
              value: 'users:9090'
---
kind: Service
apiVersion: v1
metadata:
  name: fastapi
  labels:
    app: fastapi
spec:
  selector:
    app: fastapi
  ports:
    - port: 8080
      name: http
After this I tried to test a virtual service that routes to the users service (version: v2) when an HTTP header is passed. Below are the VirtualService and DestinationRule definitions:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: users-service-destination-rule
spec:
  host: users
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: users-virtual-service
spec:
  hosts:
    - users
  http:
    - match:
        - headers:
            x-user-testing:
              exact: tester
      route:
        - destination:
            host: users
            subset: v2
    - route:
        - destination:
            host: users
            subset: v1
Below is the deployment for the users service (version v2):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-v2
  labels:
    app: users
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: users
      version: v2
  template:
    metadata:
      labels:
        app: users
        version: v2
    spec:
      containers:
        - image: users:v0.0.1
          imagePullPolicy: Always
          name: svc
          ports:
            - containerPort: 9090
When I pass the header value, the request always goes to version v1.
curl -H "x-user-testing: tester" localhost/users
Can anyone help me, please?
Thanks in advance.

Validating Error on deployment in Kubernetes

I have tried to deploy the producer-service app with a MySQL database in the Kubernetes cluster. When I try to deploy the producer app, the following validation error is thrown.
error: error validating "producer-deployment.yml": error validating data: apiVersion not set; if you choose to ignore these errors, turn validation off with --validate=false
producer-deployment.yml
apiVerion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
  -nodePort: 30163
    port: 9090
    targetPort: 9090
    protocol: TCP
  selector:
    app: producer-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer-app
spec:
  selector:
    matchLabels:
      app: producer-app
  replicas: 3
  template:
    metadata:
      labels:
        app: producer-app
    spec:
      containers:
        - name: producer
          image: producer:1.0
          ports:
            - containerPort: 9090
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
I have tried to find the error or typo in the config file but couldn't. What is wrong with the producer-deployment.yml file?
There are multiple issues:
It should be apiVersion: v1, not apiVerion: v1, in the Service.
The spec.ports section of the Service is malformed: nodePort, port, targetPort and protocol belong to a single list item under ports, but the list entry is written incorrectly (there must be a space after the dash).
Your Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
    - nodePort: 30163
      port: 9090
      targetPort: 9090
      protocol: TCP
  selector:
    app: producer-app
  type: NodePort
So your overall yaml should be:
apiVersion: v1
kind: Service
metadata:
  name: producer-app
  labels:
    name: producer-app
spec:
  ports:
    - nodePort: 30163
      port: 9090
      targetPort: 9090
      protocol: TCP
  selector:
    app: producer-app
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer-app
spec:
  selector:
    matchLabels:
      app: producer-app
  replicas: 3
  template:
    metadata:
      labels:
        app: producer-app
    spec:
      containers:
        - name: producer
          image: producer:1.0
          ports:
            - containerPort: 9090
          env:
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
            - name: DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: name
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-user
                  key: password
Please change the first line in producer-deployment.yml; the letter "s" is missing.
From
apiVerion: v1
To
apiVersion: v1
There is a typo in the first line: "apiVerion" should be "apiVersion".
Your first error (there is more than one) points you to the place where you should start your investigation:
error validating data: apiVersion not set;
As you know, each object in Kubernetes has its own apiVersion.
Check Understanding Kubernetes Objects, especially the Required Fields part:
In the .yaml file for the Kubernetes object you want to create, you'll need to set values for the following fields:
apiVersion - Which version of the Kubernetes API you're using to create this object
kind - What kind of object you want to create
metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
spec - What state you desire for the object
The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.
You can check the latest 1.20 API here.
These values are mandatory and you won't be able to create an object without them. So please, next time, read the errors you receive more carefully.
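For illustration, here is a minimal object that sets all four required fields; the name and image are arbitrary examples, not taken from the question.
apiVersion: apps/v1            # which version of the Kubernetes API to use
kind: Deployment               # what kind of object to create
metadata:
  name: example-deployment     # identifying data (arbitrary example name)
spec:                          # desired state; its format depends on the kind
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:1.21    # arbitrary example image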

Kubernetes Traefik v2.3.0 - Web UI 404 Not Found after removing --api.insecure

I'm running Traefik v2.3.0 in an AKS (Azure Kubernetes Service) cluster and I'm currently trying to set up Basic Authentication on my Traefik UI.
The dashboard (Traefik UI) works fine without any authentication, but I get a "server not found" page when I try to access it with Basic Authentication enabled.
Here is my configuration.
IngressRoute, Middleware for BasicAuth, Secret and Service:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`traefik-ui.domain.com`) && PathPrefix(`/`) || PathPrefix(`/dashboard`)
      services:
        - name: traefik-ui
          port: 80
      middlewares:
        - name: traefik-ui-auth
          namespace: ingress-basic
  tls:
    secretName: traefik-ui-cert
---
apiVersion: v1
kind: Secret
metadata:
  name: traefik-secret
  namespace: ingress-basic
data:
  users: |2
    dWlhZG06JGFwcjEkanJMZGtEb1okaS9BckJmZzFMVkNIMW80bGtKWFN6LwoK
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: traefik-ui-auth
  namespace: ingress-basic
spec:
  basicAuth:
    secret: traefik-secret
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: traefik-ingress-lb
  sessionAffinity: None
  type: ClusterIP
DaemonSet and Service:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: ingress-basic
spec:
  selector:
    matchLabels:
      app: traefik-ingress-lb
  template:
    metadata:
      labels:
        app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: size
                    operator: In
                    values:
                      - small
      containers:
        - args:
            - --api.dashboard=true
            - --accesslog
            - --accesslog.fields.defaultmode=keep
            - --accesslog.fields.headers.defaultmode=keep
            - --entrypoints.web.address=:80
            - --entrypoints.websecure.address=:443
            - --entrypoints.metrics.address=:8082
            - --providers.kubernetesIngress.ingressClass=traefik-cert-manager
            - --certificatesresolvers.default.acme.email=info#domain.com
            - --certificatesresolvers.default.acme.storage=acme.json
            - --certificatesresolvers.default.acme.tlschallenge
            - --providers.kubernetescrd
            - --ping=true
            - --pilot.token=xxxxxx-xxxx-xxxx-xxxxx-xxxxx-xx
            - --metrics.statsd=true
            - --metrics.statsd.address=localhost:8125
            - --metrics.statsd.addEntryPointsLabels=true
            - --metrics.statsd.addServicesLabels=true
          image: traefik:v2.3.0
          imagePullPolicy: IfNotPresent
          name: traefik-ingress-lb
          ports:
            - containerPort: 80
              name: web
              protocol: TCP
            - containerPort: 8080
              name: admin
              protocol: TCP
            - containerPort: 443
              name: websecure
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /acme/acme.json
              name: acme
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: traefik-ingress
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      tolerations:
        - effect: NoSchedule
          key: size
          operator: Equal
          value: small
      volumes:
        - hostPath:
            path: /srv/configs/acme.json
            type: ""
          name: acme
With this configuration:
kubectl exec -it -n ingress-basic traefik-ingress-2m88q -- curl http://localhost:8080/dashboard/
404 page not found
When removing the Middleware and adding "--api.insecure" in the DaemonSet config:
kubectl exec -it -n ingress-basic traefik-ingress-1hf4q -- curl http://localhost:8080/dashboard/
<!DOCTYPE html><html><head><title>Traefik</title><meta charset=utf-8><meta name=description content="Traefik UI"><meta name=format-detection content="telephone=no"><meta name=msapplication-tap-highlight content=no><meta name=viewport content="user-scalable=no,initial-scale=1,maximum-scale=1,minimum-scale=1,width=device-width"><link rel=icon type=image/png href=statics/app-logo-128x128.png><link rel=icon type=image/png sizes=16x16 href=statics/icons/favicon-16x16.png><link rel=icon[...]</body></html>
Please let me know what I am doing wrong here. Is there any other way of doing it?
Regards,
Here's another take on the IngressRoute, adapted to your environment.
I think 99% of the issue is the actual route matching, especially since you say --api.insecure works.
Also, as a rule of thumb, enabling logging and the access log in the DaemonSet definition helps a lot:
- --log
- --log.level=DEBUG
- --accesslog
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-ui
  namespace: ingress-basic
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik-ui.domain.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: traefik-basic-auth
  tls:
    secretName: traefik-ui-cert
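As a side note, the Secret consumed by the basicAuth middleware holds htpasswd-formatted entries under the users key. A sketch with placeholder credentials (the value below is not a real hash; generate it with htpasswd):
apiVersion: v1
kind: Secret
metadata:
  name: traefik-secret
  namespace: ingress-basic
type: Opaque
stringData:
  # placeholder entry; replace with the output of: htpasswd -nb <user> <password>
  users: "admin:$apr1$placeholder$placeholderhash"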

Istio Traffic Shifting not working in Internal communication

I deployed Istion on my local Kubernetes cluster running in my Mac. I created this VirtualService, DestinationRule and Gateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: code-gateway
  namespace: code
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "gateway.code"
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: codemaster
  namespace: code
spec:
  hosts:
    - master.code
    - codemaster
  gateways:
    - codemaster-gateway
    - code-gateway
  http:
    - route:
        - destination:
            host: codemaster
            subset: v1
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: codemaster-gateway
  namespace: code
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "master.code"
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: codemaster
  namespace: code
spec:
  host: codemaster
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
- apiVersion: "v1"
kind: "Service"
metadata:
labels:
app: "codemaster"
group: "code"
name: "codemaster"
namespace: "code"
spec:
ports:
- name: http-web
port: 80
targetPort: 80
selector:
app: "codemaster"
group: "code"
type: "ClusterIP"
- apiVersion: "apps/v1"
kind: "Deployment"
metadata:
labels:
app: "codemaster"
group: "code"
env: "production"
name: "codemaster"
namespace: "code"
spec:
replicas: 2
selector:
matchLabels:
app: "codemaster"
group: "code"
template:
metadata:
labels:
app: "codemaster"
version: "v1"
group: "code"
env: "production"
spec:
containers:
- env:
- name: "KUBERNETES_NAMESPACE"
valueFrom:
fieldRef:
fieldPath: "metadata.namespace"
- name: "SPRING_DATASOURCE_URL"
value: "jdbc:postgresql://host.docker.internal:5432/code_master"
- name: "SPRING_DATASOURCE_USERNAME"
value: "postgres"
- name: "SPRING_DATASOURCE_PASSWORD"
value: "postgres"
image: "kzone/code/codemaster:1.0.0"
imagePullPolicy: "IfNotPresent"
name: "codemaster"
ports:
- containerPort: 80
name: "http"
protocol: "TCP"
apiVersion: "v1"
kind: "List"
items:
- apiVersion: "apps/v1"
kind: "Deployment"
metadata:
labels:
app: "codemaster"
group: "code"
env: "canary"
name: "codemaster-canary"
namespace: "code"
spec:
replicas: 1
selector:
matchLabels:
app: "codemaster"
group: "code"
template:
metadata:
labels:
app: "codemaster"
version: "v2"
group: "code"
env: "canary"
spec:
containers:
- env:
- name: "KUBERNETES_NAMESPACE"
valueFrom:
fieldRef:
fieldPath: "metadata.namespace"
- name: "SPRING_DATASOURCE_URL"
value: "jdbc:postgresql://host.docker.internal:5432/code_master"
- name: "SPRING_DATASOURCE_USERNAME"
value: "postgres"
- name: "SPRING_DATASOURCE_PASSWORD"
value: "postgres"
image: "kzone/code/codemaster:1.0.1"
imagePullPolicy: "IfNotPresent"
name: "codemaster"
ports:
- containerPort: 80
name: "http"
protocol: "TCP"
These are the services running in the code namespace:
codemaster ClusterIP 10.103.151.80 <none> 80/TCP 18h
gateway ClusterIP 10.104.154.57 <none> 80/TCP 18h
I deployed 2 Spring Boot microservices into k8s. One is a Spring Boot gateway.
These are the pods running in the code namespace:
codemaster-6cb7b8ddf5-mlpzn 2/2 Running 0 7h3m
codemaster-6cb7b8ddf5-sgzt8 2/2 Running 0 7h3m
codemaster-canary-756697d9c8-22qb2 2/2 Running 0 7h3m
gateway-5b5c8697f4-jpb4q 2/2 Running 0 7h3m
When I send a request to http://master.code/version (the gateway created for the codemaster service) it always goes to the correct subset.
But when I send a request via the Spring Boot gateway (http://gateway.code/codemaster/version), the request doesn't go only to subset v1; requests go round-robin to all 3 pods. This is what I see in Kiali.
I want to apply traffic shifting between the gateway and the other services.
Istio relies on the Host header of a request to apply its traffic rules. Since you are using the Spring Boot gateway to make the request, Ribbon hits the pod IP directly instead of going through the service. To avoid this, provide a static server list in your Spring Boot gateway config, e.g. route /version to http://master.code.cluster.local, so that Ribbon does not do dynamic endpoint discovery. This should fix the problem; a sketch follows.
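A minimal sketch of such a static route in the gateway's application.yml, assuming Spring Cloud Gateway route properties; the route id and path predicate are placeholders, while the host comes from the answer above.
spring:
  cloud:
    gateway:
      routes:
        - id: codemaster                          # placeholder route id
          uri: http://master.code.cluster.local   # fixed host instead of a Ribbon-discovered pod IP
          predicates:
            - Path=/codemaster/**                 # placeholder path predicate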
After doing some searching I found that there is no CNI in Docker for Mac k8s. Because of that, traffic shifting doesn't work on Docker for Mac k8s.

Istio - GKE - gRPC config stream closed; upstream connect error or disconnect/reset before headers. reset reason: connection failure

I am trying to deploy my Spring Boot microservice in a GKE cluster with Istio 1.1.5, the latest version as of now. It throws an error and the pod never spins up. If I run it as a separate service in Kubernetes Engine it works perfectly, but with Istio it does not work. The purpose of using Istio is to host multiple microservices and to use the features Istio provides. Here is my yaml file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        tier: backend
        track: stable
    spec:
      containers:
        - name: backend
          image: "gcr.io/finacials/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
spec:
  ports:
    - port: 8081
      #targetPort: 8081
      #protocol: TCP
      name: http
  selector:
    app: revenue-serv
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /revenue/.*
            backend:
              serviceName: revenue-serv
              servicePort: 8081
Thanks for your valuable feedback.
I have found the issue. I removed the readinessProbe and livenessProbe and created an ingress gateway and virtual service. It worked.
deployment & service:
#########################################################################################
# This is for deployment - Service & Deployment in Kubernetes ################
# Author: Arindam Banerjee ################
#########################################################################################
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue-serv
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        version: v1
    spec:
      containers:
        - name: revenue-serv
          image: "eu.gcr.io/rcup-mza-dev/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
  namespace: dev
spec:
  ports:
    - port: 8081
      name: http
  selector:
    app: revenue-serv
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: worldcup-serv-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: revenue-serv-virtualservice
  namespace: dev
spec:
  hosts:
    - "*"
  gateways:
    - revenue-serv-gateway
  http:
    - route:
        - destination:
            host: revenue-serv
