Cannot access the Kafka Broker from inside the Kubernetes cluster - spring-boot

I have a Kafka broker and a Spring Boot application in my Kubernetes cluster, each running in its own container.
The Spring Boot application is a message producer. It needs to access the Kafka broker to send messages, but it cannot reach the broker when the Kafka service's servicename:port is set in the producer's bootstrap.servers.
Any help would be greatly appreciated.
ZooKeeper and Kafka broker configuration in YAML:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
  namespace: mynamespace-k8s
spec:
  type: NodePort
  ports:
    - name: zookeeper-port
      port: 2181
      nodePort: 30181
      targetPort: 2181
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
  namespace: mynamespace-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
        - image: wurstmeister/zookeeper
          imagePullPolicy: IfNotPresent
          name: zookeeper
          ports:
            - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
  namespace: mynamespace-k8s
spec:
  ports:
    - port: 9092
  selector:
    app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-broker
  name: kafka-broker
  namespace: mynamespace-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      hostname: kafka-broker
      containers:
        - env:
            - name: KAFKA_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-service.mynamespace-k8s
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-service.mynamespace-k8s:2181
            - name: KAFKA_BROKER_ID
              value: "1"
          image: wurstmeister/kafka
          imagePullPolicy: IfNotPresent
          name: kafka-broker
          ports:
            - containerPort: 9092
My Spring Boot application's configuration in YAML:
apiVersion: v1
kind: Service
metadata:
  name: locationmanager-service
  namespace: mynamespace-k8s
  labels:
    app: locationmanager
spec:
  selector:
    app: locationmanager
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 32588
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locationmanager-deployment
  namespace: mynamespace-k8s
  labels:
    app: locationmanager
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: locationmanager
  template:
    metadata:
      labels:
        app: locationmanager
    spec:
      containers:
        - name: locationmanager
          image: aef/locmanager:latest
          ports:
            - containerPort: 8081
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
            requests:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: CONFIG_KAFKA_BOOTSTRAP_SERVERS
              value: kafka-service.mynamespace-k8s:9092
Spring Boot's bootstrap servers property in application.properties:
spring.kafka.producer.bootstrap-servers=${CONFIG_KAFKA_BOOTSTRAP_SERVERS}
When the Spring Boot application tries to create a topic, I receive the exception below:
2022-07-07 10:51:50,078 ERROR o.s.k.c.KafkaAdmin [main] Could not configure topics
org.springframework.kafka.KafkaException: Timed out waiting to get existing topics; nested exception is java.util.concurrent.TimeoutException
at org.springframework.kafka.core.KafkaAdmin.lambda$checkPartitions$5(KafkaAdmin.java:275) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
at org.springframework.kafka.core.KafkaAdmin.checkPartitions(KafkaAdmin.java:254) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.kafka.core.KafkaAdmin.addOrModifyTopicsIfNeeded(KafkaAdmin.java:240) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.kafka.core.KafkaAdmin.initialize(KafkaAdmin.java:178) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.kafka.core.KafkaAdmin.afterSingletonsInstantiated(KafkaAdmin.java:145) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:972) ~[spring-beans-5.3.18.jar!/:5.3.18]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.18.jar!/:5.3.18]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.18.jar!/:5.3.18]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:740) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:415) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1312) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1301) ~[spring-boot-2.6.6.jar!/:2.6.6]
at com.trendyol.locationmanager.LocationManagerApplication.main(LocationManagerApplication.java:24) ~[classes!/:0.0.1-SNAPSHOT]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[locationmanager.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[locationmanager.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[locationmanager.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[locationmanager.jar:0.0.1-SNAPSHOT]
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180) ~[kafka-clients-3.0.1.jar!/:?]
at org.springframework.kafka.core.KafkaAdmin.lambda$checkPartitions$5(KafkaAdmin.java:257) ~[spring-kafka-2.8.4.jar!/:2.8.4]
... 23 more

There are several issues with your configuration that are keeping the services from working correctly:
The zookeeper-service is of type NodePort, so external traffic is received on nodePort: 30181 and forwarded to targetPort: 2181; the port field only defines the in-cluster port.
The kafka-service does not specify a targetPort. The service defaults to type ClusterIP, and traffic received on port: 9092 must end up on the broker pods. Add targetPort: 9092, matching the value of your KAFKA_PORT environment variable, so that incoming traffic is unambiguously forwarded to the right kafka-broker pods.
The Spring Boot application's locationmanager-service is of type LoadBalancer, so there is no need to specify the nodePort parameter; remove it. Additionally, the service receives traffic on port: 8080 and forwards it to the pods on targetPort: 8080. This is incorrect, since your application deployment exposes containerPort: 8081, not 8080.
Fixing these configuration issues should resolve the problem; a corrected sketch of the two Services follows.
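For reference, here is a minimal sketch of the two corrected Services. It assumes the Spring Boot container really listens on 8081, as the Deployment's containerPort suggests; if it actually listens on 8080, align the containerPort instead:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
  namespace: mynamespace-k8s
spec:
  ports:
    - port: 9092
      targetPort: 9092  # explicit, matching the broker's KAFKA_PORT
  selector:
    app: kafka-broker
---
apiVersion: v1
kind: Service
metadata:
  name: locationmanager-service
  namespace: mynamespace-k8s
  labels:
    app: locationmanager
spec:
  selector:
    app: locationmanager
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8081  # assumption: the app listens on 8081, per the Deployment's containerPort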

Related

(invalid_token_response) An error occurred while attempting to retrieve the OAuth 2.0 Access Token Response: 401 Unauthorized: [no body]

I'm creating microservices that are deployed in a docker-desktop Kubernetes cluster for development. I'm using Spring Security with Auth0, and the pods use Kubernetes-native service discovery coupled with Spring Cloud Gateway. When I log in using Auth0, it authenticates just fine, but the token that is received appears to be empty, based on the error given.
I'm new to Kubernetes, and this error only seems to occur when running the application on the Kubernetes cluster. If I use Eureka for local testing, Auth0 works completely fine. I've tried to research whether the issue is that the token cannot be retrieved inside the Kubernetes cluster, and the only solution I've been able to find is to implement istioctl within the cluster.
FRONTEND deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-interface-app
  labels:
    app: user-interface-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-interface-app
  template:
    metadata:
      labels:
        app: user-interface-app
    spec:
      containers:
        - name: user-interface-app
          image: imageName:tag
          imagePullPolicy: Always
          ports:
            - containerPort: 8084
          env:
            - name: GATEWAY_URL
              value: api-gateway-svc.default.svc.cluster.local
            - name: ZIPKIN_SERVER_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: zipkin_service_url
            - name: STRIPE_API_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-api-key
            - name: STRIPE_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-public-key
            - name: STRIPE_WEBHOOK_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: stripe-webhook-secret
            - name: AUTH_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-id
            - name: AUTH_CLIENT_SECRET
              valueFrom:
                secretKeyRef:
                  name: secret
                  key: auth-client-secret
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-svc
spec:
  selector:
    app: user-interface-app
  type: ClusterIP
  ports:
    - port: 8084
      targetPort: 8084
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: user-interface-lb
spec:
  selector:
    app: user-interface-app
  type: LoadBalancer
  ports:
    - name: frontend
      port: 8084
      targetPort: 8084
      protocol: TCP
    - name: request
      port: 80
      targetPort: 8084
      protocol: TCP
API-GATEWAY deployment.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gateway-cm
data:
  cart_service_url: http://cart-service-svc.default.svc.cluster.local
  customer_profile_service_url: http://customer-profile-service-svc.default.svc.cluster.local
  order_service_url: http://order-service-svc.default.svc.cluster.local
  product_service_url: lb://product-service-svc.default.svc.cluster.local
  zipkin_service_url: http://zipkin-svc.default.svc.cluster.local:9411
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-gateway-app
  labels:
    app: api-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api-gateway-app
  template:
    metadata:
      labels:
        app: api-gateway-app
    spec:
      containers:
        - name: api-gateway-app
          image: imageName:imageTag
          imagePullPolicy: Always
          ports:
            - containerPort: 8090
          env:
            - name: PRODUCT_SERVICE_URL
              valueFrom:
                configMapKeyRef:
                  name: gateway-cm
                  key: product_service_url
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-np
spec:
  selector:
    app: api-gateway-app
  type: NodePort
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP
      nodePort: 30499
---
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-svc
spec:
  selector:
    app: api-gateway-app
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8090
      protocol: TCP

Spring Boot service deployed in AKS not working with Ingress

I have a simple Spring Boot service, which works well when I configure a Service of type: LoadBalancer. But when I use a Service of type: ClusterIP and introduce an Ingress, it does not work.
For that matter, I am unable to get Ingress working for any of my deployments in Azure/AKS.
Please suggest what I am missing.
Spring code
@RestController
@RequestMapping("/demo")
public class MyController {

    @GetMapping("/welcome")
    public String welcome() {
        return "Hello Welcome";
    }
}
LoadBalancer - working code
spring-microservice-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-microservice
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-microservice
  template:
    metadata:
      labels:
        app: spring-microservice
    spec:
      containers:
        - name: spring-microservice
          image: babaacr.azurecr.io/welcome-service:1.0
          resources:
            requests:
              memory: '256Mi'
              cpu: '500m'
            limits:
              memory: '512Mi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8080
spring-microservice-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-microservice
  namespace: default
  labels:
    app: spring-microservice
spec:
  selector:
    app: spring-microservice
  type: LoadBalancer
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
When I access the URL http://20.XXX.XX.XX:8080/demo/welcome, the message is printed.
Ingress-based setup, which is not working:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-microservice-x
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-microservice-x
  template:
    metadata:
      labels:
        app: spring-microservice-x
    spec:
      containers:
        - name: spring-microservice-x
          image: babaacr.azurecr.io/welcome-service:1.0
          resources:
            requests:
              memory: '256Mi'
              cpu: '500m'
            limits:
              memory: '512Mi'
              cpu: '1'
          ports:
            - name: http
              containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: spring-microservice-x
  namespace: default
  labels:
    app: spring-microservice-x
spec:
  selector:
    app: spring-microservice-x
  type: ClusterIP
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-microservice-ingress
spec:
  defaultBackend:
    service:
      name: spring-microservice-x
      port:
        number: 8080
When I access the URL http://20.YYY.YY.YY:8080/demo/welcome, the page times out.
The ingress controller was configured using the command below:
helm install ingress-nginx ingress-nginx/ingress-nginx --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
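One thing worth checking (an assumption on my part, since the controller details aren't shown): the ingress-nginx controller receives traffic on its own LoadBalancer IP on ports 80/443, not on the backend service's port 8080, and the Ingress has to be bound to that controller, for example via ingressClassName. A sketch of what that could look like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: spring-microservice-ingress
spec:
  ingressClassName: nginx  # assumption: the IngressClass created by the ingress-nginx Helm chart
  defaultBackend:
    service:
      name: spring-microservice-x
      port:
        number: 8080
With that binding in place, the request would go to http://<ingress-controller-external-ip>/demo/welcome on port 80, rather than to port 8080.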

Jaeger configuration in a Spring Boot application when both are deployed on Kubernetes

I am trying to trace the logs of my Spring Boot application with Jaeger, so what steps should be performed when both the application and Jaeger are deployed on Kubernetes?
I have successfully deployed Jaeger and the Spring Boot application; now how do I configure Jaeger in my service?
My services are not visible in the Jaeger console.
I have added the following configuration to the YML:
opentracing.jaeger.udp-sender.host=localhost
opentracing.jaeger.udp-sender.port=6831
apiVersion: v1
kind: List
items:
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: jaeger
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: all-in-one
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: jaeger
            app.kubernetes.io/name: jaeger
            app.kubernetes.io/component: all-in-one
          annotations:
            prometheus.io/scrape: "true"
            prometheus.io/port: "16686"
        spec:
          containers:
            - env:
                - name: COLLECTOR_ZIPKIN_HTTP_PORT
                  value: "9411"
              image: jaegertracing/all-in-one
              name: jaeger
              ports:
                - containerPort: 5775
                  protocol: UDP
                - containerPort: 6831
                  protocol: UDP
                - containerPort: 6832
                  protocol: UDP
                - containerPort: 5778
                  protocol: TCP
                - containerPort: 16686
                  protocol: TCP
                - containerPort: 9411
                  protocol: TCP
              readinessProbe:
                httpGet:
                  path: "/"
                  port: 14269
                initialDelaySeconds: 5
  - apiVersion: v1
    kind: Service
    metadata:
      name: jaeger-query
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: query
    spec:
      ports:
        - name: query-http
          port: 80
          protocol: TCP
          targetPort: 16686
      selector:
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: all-in-one
      type: LoadBalancer
  - apiVersion: v1
    kind: Service
    metadata:
      name: jaeger-collector
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: collector
    spec:
      ports:
        - name: jaeger-collector-tchannel
          port: 14267
          protocol: TCP
          targetPort: 14267
        - name: jaeger-collector-http
          port: 14268
          protocol: TCP
          targetPort: 14268
        - name: jaeger-collector-zipkin
          port: 9411
          protocol: TCP
          targetPort: 9411
      selector:
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: all-in-one
      type: ClusterIP
  - apiVersion: v1
    kind: Service
    metadata:
      name: jaeger-agent
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: agent
    spec:
      ports:
        - name: agent-zipkin-thrift
          port: 5775
          protocol: UDP
          targetPort: 5775
        - name: agent-compact
          port: 6831
          protocol: UDP
          targetPort: 6831
        - name: agent-binary
          port: 6832
          protocol: UDP
          targetPort: 6832
        - name: agent-configs
          port: 5778
          protocol: TCP
          targetPort: 5778
      clusterIP: None
      selector:
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: all-in-one
  - apiVersion: v1
    kind: Service
    metadata:
      name: zipkin
      labels:
        app: jaeger
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: zipkin
    spec:
      ports:
        - name: jaeger-collector-zipkin
          port: 9411
          protocol: TCP
          targetPort: 9411
      clusterIP: None
      selector:
        app.kubernetes.io/name: jaeger
        app.kubernetes.io/component: all-in-one
Output of kubectl describe service jaeger-query:
Name: jaeger-query
Namespace: default
Labels: app=jaeger
app.kubernetes.io/component=query
app.kubernetes.io/name=jaeger
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"jaeger",
"app.kubernetes.io/component":"query","app.kuber...
Selector: app.kubernetes.io/component=all-in-one,app.kubernetes.io/name=jaeger
Type: LoadBalancer
IP: 10.24.14.223
LoadBalancer Ingress: 35.222.40.241
Port: query-http 80/TCP
TargetPort: 16686/TCP
NodePort: query-http 30290/TCP
Endpoints: 10.20.2.7:16686
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So the solution that works for me is this. I made the following changes in my application's application.properties file:
opentracing.jaeger.udp-sender.host=http://<load_balancer_ip_of_jaeger_service>:<expose_port> (example: ":80")
opentracing.jaeger.http-sender.url=http://<cluster_ip_or_load_balancer_ip_of_jaeger_collector>:<expose_port>/api/traces (example of an exposed port: ":14268")
You add the opentracing-spring-jaeger-starter library into the project, which simply contains the code needed to provide a Jaeger implementation of OpenTracing's io.opentracing.Tracer interface.
Since you have Jaeger deployed in Kubernetes and exposed via a LoadBalancer service, you can use the LoadBalancer IP and port to connect to it from outside the Kubernetes cluster.
You should configure:
opentracing.jaeger.udp-sender.host=jaeger-agent
opentracing.jaeger.udp-sender.port=6831
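Since the producing application also runs inside the cluster, the same settings can be injected through the Deployment instead of being hard-coded; a minimal sketch, assuming the jaeger-agent headless Service defined above and Spring Boot's relaxed binding of environment variables to properties:
env:
  - name: OPENTRACING_JAEGER_UDPSENDER_HOST   # canonical env form of opentracing.jaeger.udp-sender.host
    value: jaeger-agent                       # the headless agent Service defined above
  - name: OPENTRACING_JAEGER_UDPSENDER_PORT   # canonical env form of opentracing.jaeger.udp-sender.port
    value: "6831"                             # the agent's compact-thrift UDP port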

How to connect to a headless Mongo Service in Kubernetes

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    name: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
        # environment: test
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
        - name: mongo-pv-storage
          persistentVolumeClaim:
            claimName: mongo-pv-claim
      containers:
        - name: mongo
          image: mongo:4.0.12-xenial
          command:
            - mongod
            - "--bind_ip"
            - 0.0.0.0
            - "--smallfiles"
            - "--noprealloc"
          ports:
            - containerPort: 27017
              name: mongo
          volumeMounts:
            - name: mongo-pv-storage
              mountPath: /data/db
I have used the above YAML. MongoDB is running fine (checked using the kubectl exec command). The YAML below is used to deploy the Spring Boot application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imageprocessor-app-backend
  labels:
    app: imageprocessor-app-backend
spec:
  # modify replicas according to your case
  selector:
    matchLabels:
      tier: imageprocessor-app-backend
  template:
    metadata:
      labels:
        tier: imageprocessor-app-backend
    spec:
      containers:
        - name: imageprocessor-app-backend
          image: imageprocessor-app-backend:v1
          ports:
            - containerPort: 8099
          env:
            - name: spring.data.mongodb.host
              value: mongo-0.mongo
            - name: spring.data.mongodb.port
              value: "27017"
            - name: spring.data.mongodb.database
              value: testdb
---
apiVersion: v1
kind: Service
metadata:
  name: imageprocessor-app-backend
spec:
  type: NodePort
  ports:
    - port: 8099
      nodePort: 31471
  selector:
    tier: imageprocessor-app-backend
The exception I am getting is
2019-09-24 12:27:04.902 INFO 1 --- [o-0.mongo:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server mongo-0.mongo:27017
com.mongodb.MongoSocketException: mongo-0.mongo: Try again
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:188) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.8.2.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_212]
Caused by: java.net.UnknownHostException: mongo-0.mongo: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_212]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_212]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_212]
at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_212]
at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_212]
at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_212]
at java.net.InetAddress.getByName(InetAddress.java:1077) ~[na:1.8.0_212]
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:186) ~[mongodb-driver-core-3.8.2.jar:na]
... 5 common frames omitted
How do I connect to the headless Mongo service from my application? I tried setting spring.data.mongodb.host to mongo-0.mongo and to mongo.
You need to use the name of the Service as the hostname; in your example, it's mongo. I deployed Mongo with your YAML above and could successfully connect to it from another pod in the same namespace.
If you're running imageprocessor-app-backend in a different namespace than mongo, then you have to append the namespace where mongo is running to the hostname: mongo.<namespace>, e.g. mongo.mongo.
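For illustration, the backend Deployment's env entry could take any of these forms; the db namespace below is only an assumption for the cross-namespace case:
env:
  - name: spring.data.mongodb.host
    value: mongo                          # same namespace: the Service name alone resolves
  # value: mongo.db.svc.cluster.local     # cross-namespace (assumption: mongo runs in namespace "db")
  # value: mongo-0.mongo                  # a specific StatefulSet pod behind the headless Service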

Istio - GKE - gRPC config stream closed; upstream connect error or disconnect/reset before headers. reset reason: connection failure

I am trying to run my Spring Boot microservice in a GKE cluster with Istio 1.1.5, the latest version as of now. It throws an error and the pod never spins up. If I run it as a separate service in Kubernetes Engine it works perfectly, but with Istio it does not work. The purpose of using Istio is to host multiple microservices and to use the features Istio provides. Here is my YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        tier: backend
        track: stable
    spec:
      containers:
        - name: backend
          image: "gcr.io/finacials/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
          livenessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 15
            timeoutSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
spec:
  ports:
    - port: 8081
      #targetPort: 8081
      #protocol: TCP
      name: http
  selector:
    app: revenue-serv
    tier: backend
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  rules:
    - http:
        paths:
          - path: /revenue/.*
            backend:
              serviceName: revenue-serv
              servicePort: 8081
Thanks for your valuable feedback.
I have found the issue. I removed the readinessProbe and livenessProbe and created an ingress gateway and a virtual service. It worked.
deployment & service:
#########################################################################################
# This is for deployment - Service & Deployment in Kubernetes ################
# Author: Arindam Banerjee ################
#########################################################################################
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: revenue-serv
  namespace: dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: revenue-serv
        version: v1
    spec:
      containers:
        - name: revenue-serv
          image: "eu.gcr.io/rcup-mza-dev/revenue-serv:latest"
          imagePullPolicy: Always
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: revenue-serv
  namespace: dev
spec:
  ports:
    - port: 8081
      name: http
  selector:
    app: revenue-serv
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: revenue-serv-gateway
  namespace: dev
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: revenue-serv-virtualservice
  namespace: dev
spec:
  hosts:
    - "*"
  gateways:
    - revenue-serv-gateway
  http:
    - route:
        - destination:
            host: revenue-serv
