Spring Webflux and Proxy Protocol on Kubernetes

I have a Spring WebFlux application deployed on DigitalOcean Kubernetes with proxy protocol enabled, because the app needs access to the client IP.
Here are the Kubernetes resource descriptions:
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ipregistry-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp/myapp:1553679009241
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
          successThreshold: 1
      imagePullSecrets:
      - name: regcred
The deployment works great. Unfortunately, when I perform a GET request to any app endpoint, I get an HTTP 400 response.
Looking at the logs, I noticed that Spring WebFlux and the underlying Netty library do not parse the request properly when proxy protocol is enabled.
Please note that if proxy protocol is disabled by setting service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol to false, then everything works, but I cannot get the real client IP.
Here is the error I get in the logs when debugging is enabled:
2019-03-27 09:33:45.916 DEBUG 1 --- [or-http-epoll-1] r.n.http.server.HttpServerOperations : [id: 0x3c2ee440, L:/10.244.2.230:8080 - R:/10.131.95.63:27204] New http connection, requesting read
2019-03-27 09:33:45.916 DEBUG 1 --- [or-http-epoll-1] reactor.netty.channel.BootstrapHandlers : [id: 0x3c2ee440, L:/10.244.2.230:8080 - R:/10.131.95.63:27204] Initialized pipeline DefaultChannelPipeline{(BootstrapHandlers$BootstrapInitializerHandler#0 = reactor.netty.channel.BootstrapHandlers$BootstrapInitializerHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpServerCodec), (reactor.left.httpTrafficHandler = reactor.netty.http.server.HttpTrafficHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-03-27 09:33:45.917 DEBUG 1 --- [or-http-epoll-1] r.n.http.server.HttpServerOperations : [id: 0x3c2ee440, L:/10.244.2.230:8080 - R:/10.131.95.63:27204] Decoding failed: DefaultFullHttpRequest(decodeResult: failure(java.lang.IllegalArgumentException: invalid version format: 83.252.136.179 10.16.5.223 3379 80), version: HTTP/1.0, content: UnpooledByteBufAllocator$InstrumentedUnpooledUnsafeHeapByteBuf(ridx: 0, widx: 0, cap: 0)) GET /bad-request HTTP/1.0 :
java.lang.IllegalArgumentException: invalid version format: 83.252.136.179 10.16.5.223 3379 80
    at io.netty.handler.codec.http.HttpVersion.<init>(HttpVersion.java:121) ~[netty-codec-http-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.http.HttpVersion.valueOf(HttpVersion.java:76) ~[netty-codec-http-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.http.HttpRequestDecoder.createMessage(HttpRequestDecoder.java:87) ~[netty-codec-http-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.http.HttpObjectDecoder.decode(HttpObjectDecoder.java:219) ~[netty-codec-http-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.http.HttpServerCodec$HttpServerRequestDecoder.decode(HttpServerCodec.java:101) ~[netty-codec-http-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502) ~[netty-codec-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441) ~[netty-codec-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278) ~[netty-codec-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930) ~[netty-transport-4.1.33.Final.jar:4.1.33.Final]
    at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:799) ~[netty-transport-native-epoll-4.1.33.Final-linux-x86_64.jar:4.1.33.Final]
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:427) ~[netty-transport-native-epoll-4.1.33.Final-linux-x86_64.jar:4.1.33.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:328) ~[netty-transport-native-epoll-4.1.33.Final-linux-x86_64.jar:4.1.33.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905) ~[netty-common-4.1.33.Final.jar:4.1.33.Final]
    at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
I tried using the property server.use-forward-headers=true in my application properties, but it has no effect.
Is there a way to support proxy protocol with Spring WebFlux? How should I proceed? An example would help a lot.
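Note that server.use-forward-headers only deals with X-Forwarded-* HTTP headers, whereas proxy protocol prepends a plain-text preamble at the TCP level (the "83.252.136.179 10.16.5.223 3379 80" visible in the log), which the HTTP codec then tries to parse as part of the request line, hence the 400. Reactor Netty can strip and decode that preamble itself. Below is a minimal sketch, assuming Spring Boot 2.4+ / Reactor Netty 1.x (where HttpServer#proxyProtocol is available) and io.netty:netty-codec-haproxy on the classpath:

import org.springframework.boot.web.embedded.netty.NettyReactiveWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.netty.http.server.ProxyProtocolSupportType;

@Configuration
public class ProxyProtocolConfig {

    // Tell Reactor Netty to expect and decode a PROXY protocol header
    // before the HTTP request bytes on every new connection.
    @Bean
    public WebServerFactoryCustomizer<NettyReactiveWebServerFactory> proxyProtocolCustomizer() {
        return factory -> factory.addServerCustomizers(httpServer ->
                httpServer.proxyProtocol(ProxyProtocolSupportType.ON));
    }
}

With this in place, the address reported by ServerHttpRequest#getRemoteAddress() should be the real client address carried in the proxy protocol header.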

Related

Cannot access the Kafka Broker from inside the Kubernetes cluster

I have a Kafka broker and a Spring Boot application in my Kubernetes cluster, each running in its own container.
The Spring Boot application is a message producer. It needs to access the Kafka broker to send messages, but it cannot reach the broker using Kafka's service name and port in the producer's bootstrap.servers.
Any help would be greatly appreciated.
Zookeeper and Kafka broker configuration in YAML:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-service
  name: zookeeper-service
  namespace: mynamespace-k8s
spec:
  type: NodePort
  ports:
  - name: zookeeper-port
    port: 2181
    nodePort: 30181
    targetPort: 2181
  selector:
    app: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: zookeeper
  name: zookeeper
  namespace: mynamespace-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - image: wurstmeister/zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper
        ports:
        - containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
  namespace: mynamespace-k8s
spec:
  ports:
  - port: 9092
  selector:
    app: kafka-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka-broker
  name: kafka-broker
  namespace: mynamespace-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-broker
  template:
    metadata:
      labels:
        app: kafka-broker
    spec:
      hostname: kafka-broker
      containers:
      - env:
        - name: KAFKA_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: kafka-service.mynamespace-k8s
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-service.mynamespace-k8s:2181
        - name: KAFKA_BROKER_ID
          value: "1"
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        name: kafka-broker
        ports:
        - containerPort: 9092
My Spring Boot application configuration in YAML:
apiVersion: v1
kind: Service
metadata:
  name: locationmanager-service
  namespace: mynamespace-k8s
  labels:
    app: locationmanager
spec:
  selector:
    app: locationmanager
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 32588
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locationmanager-deployment
  namespace: mynamespace-k8s
  labels:
    app: locationmanager
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: locationmanager
  template:
    metadata:
      labels:
        app: locationmanager
    spec:
      containers:
      - name: locationmanager
        image: aef/locmanager:latest
        ports:
        - containerPort: 8081
        resources:
          limits:
            memory: "1Gi"
            cpu: "1000m"
          requests:
            memory: "256Mi"
            cpu: "500m"
        env:
        - name: CONFIG_KAFKA_BOOTSTRAP_SERVERS
          value: kafka-service.mynamespace-k8s:9092
Spring Boot's bootstrap servers setting in application.properties:
spring.kafka.producer.bootstrap-servers=${CONFIG_KAFKA_BOOTSTRAP_SERVERS}
When the Spring Boot application tries to create a topic, I receive the exception below:
2022-07-07 10:51:50,078 ERROR o.s.k.c.KafkaAdmin [main] Could not configure topics
org.springframework.kafka.KafkaException: Timed out waiting to get existing topics; nested exception is java.util.concurrent.TimeoutException
at org.springframework.kafka.core.KafkaAdmin.lambda$checkPartitions$5(KafkaAdmin.java:275) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
at org.springframework.kafka.core.KafkaAdmin.checkPartitions(KafkaAdmin.java:254) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.kafka.core.KafkaAdmin.addOrModifyTopicsIfNeeded(KafkaAdmin.java:240) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.kafka.core.KafkaAdmin.initialize(KafkaAdmin.java:178) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.kafka.core.KafkaAdmin.afterSingletonsInstantiated(KafkaAdmin.java:145) ~[spring-kafka-2.8.4.jar!/:2.8.4]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:972) ~[spring-beans-5.3.18.jar!/:5.3.18]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:918) ~[spring-context-5.3.18.jar!/:5.3.18]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.18.jar!/:5.3.18]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:740) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:415) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:303) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1312) ~[spring-boot-2.6.6.jar!/:2.6.6]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1301) ~[spring-boot-2.6.6.jar!/:2.6.6]
at com.trendyol.locationmanager.LocationManagerApplication.main(LocationManagerApplication.java:24) ~[classes!/:0.0.1-SNAPSHOT]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[locationmanager.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[locationmanager.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[locationmanager.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:88) ~[locationmanager.jar:0.0.1-SNAPSHOT]
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1886) ~[?:?]
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2021) ~[?:?]
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180) ~[kafka-clients-3.0.1.jar!/:?]
at org.springframework.kafka.core.KafkaAdmin.lambda$checkPartitions$5(KafkaAdmin.java:257) ~[spring-kafka-2.8.4.jar!/:2.8.4]
... 23 more
There are several issues with your configuration that are keeping the services from working correctly:
The zookeeper-service is of type NodePort. External traffic is received on nodePort: 30181 and forwarded to the pods on targetPort: 2181, while in-cluster clients (such as your KAFKA_ZOOKEEPER_CONNECT value) reach it through port: 2181.
The kafka-service does not specify a targetPort. The service defaults to type ClusterIP and receives traffic on port: 9092; add targetPort: 9092, the value of your KAFKA_PORT environment variable, so incoming traffic is explicitly forwarded to the kafka-broker pods (when omitted, targetPort defaults to the value of port).
The locationmanager-service is of type LoadBalancer, so there is no need to specify the nodePort parameter; remove it. More importantly, the service receives traffic on port: 8080 and forwards it to the pods on targetPort: 8080, which is incorrect since your application deployment exposes containerPort: 8081, not 8080.
Fixing these configuration issues will fix your problem; a sketch of the corrected Services follows.
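For illustration, here is how the two corrected Services could look (a sketch; only the port sections differ from the original manifests):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka-broker
  name: kafka-service
  namespace: mynamespace-k8s
spec:
  ports:
  - port: 9092
    targetPort: 9092  # forward explicitly to the broker pods (matches KAFKA_PORT)
  selector:
    app: kafka-broker
---
apiVersion: v1
kind: Service
metadata:
  name: locationmanager-service
  namespace: mynamespace-k8s
  labels:
    app: locationmanager
spec:
  selector:
    app: locationmanager
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8081  # must match the containerPort of the deployment; nodePort removed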

How to connect to a headless Mongo Service in Kubernetes

apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    name: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
        # environment: test
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: mongo-pv-storage
        persistentVolumeClaim:
          claimName: mongo-pv-claim
      containers:
      - name: mongo
        image: mongo:4.0.12-xenial
        command:
        - mongod
        - "--bind_ip"
        - 0.0.0.0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: mongo-pv-storage
          mountPath: /data/db
I have used the above YAML. MongoDB is running fine (checked using kubectl exec). The YAML below is used to deploy the Spring Boot application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imageprocessor-app-backend
  labels:
    app: imageprocessor-app-backend
spec:
  # modify replicas according to your case
  selector:
    matchLabels:
      tier: imageprocessor-app-backend
  template:
    metadata:
      labels:
        tier: imageprocessor-app-backend
    spec:
      containers:
      - name: imageprocessor-app-backend
        image: imageprocessor-app-backend:v1
        ports:
        - containerPort: 8099
        env:
        - name: spring.data.mongodb.host
          value: mongo-0.mongo
        - name: spring.data.mongodb.port
          value: "27017"
        - name: spring.data.mongodb.database
          value: testdb
---
apiVersion: v1
kind: Service
metadata:
  name: imageprocessor-app-backend
spec:
  type: NodePort
  ports:
  - port: 8099
    nodePort: 31471
  selector:
    tier: imageprocessor-app-backend
The exception I am getting is
2019-09-24 12:27:04.902 INFO 1 --- [o-0.mongo:27017] org.mongodb.driver.cluster : Exception in monitor thread while connecting to server mongo-0.mongo:27017
com.mongodb.MongoSocketException: mongo-0.mongo: Try again
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:188) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.SocketStream.open(SocketStream.java:62) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) ~[mongodb-driver-core-3.8.2.jar:na]
at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) ~[mongodb-driver-core-3.8.2.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_212]
Caused by: java.net.UnknownHostException: mongo-0.mongo: Try again
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method) ~[na:1.8.0_212]
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:929) ~[na:1.8.0_212]
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1324) ~[na:1.8.0_212]
at java.net.InetAddress.getAllByName0(InetAddress.java:1277) ~[na:1.8.0_212]
at java.net.InetAddress.getAllByName(InetAddress.java:1193) ~[na:1.8.0_212]
at java.net.InetAddress.getAllByName(InetAddress.java:1127) ~[na:1.8.0_212]
at java.net.InetAddress.getByName(InetAddress.java:1077) ~[na:1.8.0_212]
at com.mongodb.ServerAddress.getSocketAddress(ServerAddress.java:186) ~[mongodb-driver-core-3.8.2.jar:na]
... 5 common frames omitted
How do I connect to the headless Mongo service from my application? I tried setting spring.data.mongodb.host to both mongo-0.mongo and mongo.
You need to use the name of the service as the hostname. In your example, it's mongo. I deployed Mongo with your YAML above and I could successfully connect to it from another pod in the same namespace.
If you're running imageprocessor-app-backend in a different namespace than mongo, then you have to add the namespace where mongo is running to the hostname: mongo.<namespace>, e.g. mongo.mongo. An illustrative snippet follows.
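For example, assuming (hypothetically) that the mongo service lives in a namespace called mongo while the backend does not, the deployment's env entry would become:

env:
- name: spring.data.mongodb.host
  value: mongo.mongo  # <service>.<namespace>; plain "mongo" works when both run in the same namespace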

Deploy REST + gRPC server to k8s with ingress

I have used the sample gRPC HelloWorld application https://github.com/grpc/grpc-go/tree/master/examples/helloworld. This example runs smoothly on my local system.
I want to deploy it to Kubernetes with the use of an Ingress.
Below are my config files.
service.yaml - as NodePort
apiVersion: v1
kind: Service
metadata:
  name: grpc-scratch
  labels:
    run: grpc-scratch
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    run: example-grpc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc
          servicePort: 50051
I am unable to make a gRPC request to the server with the URL xyz.com/grpc. I get the error:
{
  "error": "14 UNAVAILABLE: Name resolution failure"
}
If I make the request to xyz.com, the error is:
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
Any help would be appreciated.
A backend of an Ingress object is a combination of a service name and a service port.
In your case the backend references serviceName: grpc, while your Service's actual name is grpc-scratch:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc-scratch
          servicePort: grpc
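As a side note on the servicePort value: referencing the port by its name (grpc) instead of the number 50051 means the Ingress keeps working even if the Service's port number changes later; the backend accepts either form.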

Deploying a Spring Boot Application with Redis in Kubernetes: Jedis Connection Refused Error

While deploying to Kubernetes, the Redis connection cannot be established because of a Jedis connection refused error:
"message": "Cannot get Jedis connection; nested exception is
redis.clients.jedis.exceptions.JedisConnectionException:
java.net.ConnectException: Connection refused (Connection refused)",
Deployment yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis-master
        image: gcr.io/google_containers/redis:e2e
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-storage
          mountPath: /data/redis
      volumes:
      - name: redis-storage
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  ports:
  - port: 6379
  selector:
    app: redis
Sample Jedis code used in the project:
JedisConnectionFactory jedisConnectionFactoryUpdated() {
    RedisStandaloneConfiguration redisStandaloneConfiguration = new RedisStandaloneConfiguration();
    redisStandaloneConfiguration.setHostName("redis-master");
    redisStandaloneConfiguration.setPort(6379);
    JedisClientConfigurationBuilder jedisClientConfiguration = JedisClientConfiguration.builder();
    jedisClientConfiguration.connectTimeout(Duration.ofSeconds(60)); // 60s connection timeout
    JedisConnectionFactory jedisConFactory = new JedisConnectionFactory(redisStandaloneConfiguration,
            jedisClientConfiguration.build());
    return jedisConFactory;
}
Has anybody overcome this issue? TIA.
You need to first update your Service to specify the targetPort explicitly:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
Once you have done so, you can check whether or not your Redis service is up and responding by using nmap. Here is an example using my nmap image:
kubectl run --image=appsoa/docker-alpine-nmap --rm -i -t nm -- -Pn 6379 redis-master
Also, make sure that both Redis and your Spring Boot app are deployed to the same namespace. If not, you need to explicitly qualify the hostname with the namespace (i.e. "redis-master.mynamespace").
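As a side note, hardcoding the hostname makes moving between namespaces painful. Here is a sketch of the same factory with host and port externalized; the redis.host and redis.port property names are illustrative, not from the original post:

import java.time.Duration;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.jedis.JedisClientConfiguration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class RedisConfig {

    @Bean
    JedisConnectionFactory jedisConnectionFactory(
            @Value("${redis.host:redis-master}") String host, // e.g. "redis-master.mynamespace"
            @Value("${redis.port:6379}") int port) {
        RedisStandaloneConfiguration config = new RedisStandaloneConfiguration(host, port);
        return new JedisConnectionFactory(config,
                JedisClientConfiguration.builder()
                        .connectTimeout(Duration.ofSeconds(60)) // same 60s connect timeout as above
                        .build());
    }
}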

Unable to access websocket over Kubernetes ingress

I have deployed two services to a Kubernetes cluster on GCP:
One is a Spring Cloud API Gateway implementation:
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: api-gateway
    tier: web
  type: NodePort
The other one is a backend chat service implementation which exposes a WebSocket at the /ws/ path.
apiVersion: v1
kind: Service
metadata:
  name: chat-api
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: chat
    tier: web
  type: NodePort
The API Gateway is exposed to the internet through a Contour ingress controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: api-gateway-tls
    hosts:
    - api.mydomain.com.br
  rules:
  - host: api.mydomain.com.br
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
The gateway routes incoming calls on the /chat/ path to the chat service on /ws/:
@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
    return builder.routes()
            .route(r -> r.path("/chat/**")
                    .filters(f -> f.rewritePath("/chat/(?<segment>.*)", "/ws/${segment}"))
                    .uri("ws://chat-api"))
            .build();
}
When I try to connect to the WebSocket through the gateway I get a 403 error:
error: Unexpected server response: 403
I even tried connecting using http, https, ws and wss, but the error remains.
Does anyone have a clue?
I had the same issue using the Ingress resource with Contour 0.5.0, but I managed to solve it by upgrading Contour to v0.6.0-beta.3 and using IngressRoute (be aware, though, that it's a beta version).
You can add an IngressRoute resource (CRD) like this (remove your previous Ingress resource):
# ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: api-gateway-ingress
  namespace: default
spec:
  virtualhost:
    fqdn: api.mydomain.com.br
    tls:
      secretName: api-gateway-tls
  routes:
  - match: /
    services:
    - name: api-gateway
      port: 80
  - match: /chat
    enableWebsockets: true # setting this to true enables websockets for all paths that match /chat
    services:
    - name: api-gateway
      port: 80
Then apply it:
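Assuming the manifest is saved as ingressroute.yaml (matching the comment in the snippet above):

kubectl apply -f ingressroute.yaml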
Websockets will be authorized only on the /chat path.
See here for more detail about Contour IngressRoute.
