I have deployed two services to a Kubernetes cluster on GCP:
One is a Spring Cloud API Gateway implementation:
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: api-gateway
    tier: web
  type: NodePort
The other one is a backend chat service implementation, which exposes a WebSocket at the /ws/ path:
apiVersion: v1
kind: Service
metadata:
  name: chat-api
spec:
  ports:
  - name: main
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: chat
    tier: web
  type: NodePort
The API Gateway is exposed to the internet through a Contour ingress controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-gateway-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    certmanager.k8s.io/cluster-issuer: "letsencrypt-prod"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: api-gateway-tls
    hosts:
    - api.mydomain.com.br
  rules:
  - host: api.mydomain.com.br
    http:
      paths:
      - backend:
          serviceName: api-gateway
          servicePort: 80
The gateway routes incoming calls on the /chat/ path to the chat service on /ws/:
@Bean
public RouteLocator routes(RouteLocatorBuilder builder) {
    return builder.routes()
            .route(r -> r.path("/chat/**")
                    .filters(f -> f.rewritePath("/chat/(?<segment>.*)", "/ws/${segment}"))
                    .uri("ws://chat-api"))
            .build();
}
When I try to connect to the WebSocket through the gateway I get a 403 error:
error: Unexpected server response: 403
I even tried connecting using http, https, ws and wss, but the error remains.
Does anyone have a clue?
I had the same issue using an Ingress resource with Contour 0.5.0, but I managed to solve it by
upgrading Contour to v0.6.0-beta.3 and using IngressRoute (be aware, though, that it is a beta version).
You can add an IngressRoute resource (a CRD) like this, and remove your previous Ingress resource:
# ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: api-gateway-ingress
  namespace: default
spec:
  virtualhost:
    fqdn: api.mydomain.com.br
    tls:
      secretName: api-gateway-tls
  routes:
  - match: /
    services:
    - name: api-gateway
      port: 80
  - match: /chat
    enableWebsockets: true # setting this to true enables websockets for all paths that match /chat
    services:
    - name: api-gateway
      port: 80
Then apply it:
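kubectl apply -f ingressroute.yaml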
WebSockets will be authorized only on the /chat path.
See the Contour IngressRoute documentation for more detail.
Can we have a single Ingress resource for the deployment of all MuleSoft applications in RTF (Runtime Fabric) on self-managed Kubernetes on AWS?
Ingress template:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp1-svc
            port:
              number: 80
      - pathType: Prefix
        path: /
        backend:
          service:
            name: temp2-svc
            port:
              number: 80
temp1-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp1-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp1-svc
temp2-svc:
apiVersion: v1
kind: Service
metadata:
  name: temp2-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: temp2-svc
I am new to RTF. Are there any changes to be made in the Ingress resource, or do we need a separate Ingress resource for each application? Any help would be appreciated.
Thanks
Generally, managing a separate ingress per application is a good option. You can also use a single ingress and route and forward traffic across the cluster.
Single ingress for all services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rtf-ingress
  namespace: rtf
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/app-name(/|$)(.*) /$2 break;
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/enable-underscores-in-headers: "true"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: rtf-nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: ImplementationSpecific
        path: /service(/|$)(.*)
        backend:
          service:
            name: service
            port:
              number: 80
      - pathType: ImplementationSpecific
        path: /service-2(/|$)(.*)
        backend:
          service:
            name: service-2
            port:
              number: 80
The benefit of multiple ingress resources, or a separate ingress per application, is that you can keep and configure different annotations on each ingress. On one you may want to enable CORS, while on another you may want to change the proxy body size, and so on. So it is better to manage a separate ingress for each microservice.
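For illustration, here is a minimal sketch of that per-service setup, assuming the standard nginx ingress class; the names app-one/app-two and their paths are hypothetical. CORS is enabled only on the first ingress, and a larger proxy body size only on the second:
# Hypothetical sketch: one ingress per microservice, each with its own annotations
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-one-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"   # CORS for app-one only
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /app-one
        backend:
          service:
            name: app-one-svc
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-two-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"   # larger uploads for app-two only
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - pathType: Prefix
        path: /app-two
        backend:
          service:
            name: app-two-svc
            port:
              number: 80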
I am trying to host my WebAPI service in Kubernetes over HTTPS. The service has inbound communication over port 80 and outbound WebSocket communication over port 81. I have configured the AKS ingress controller with a static IP assigned to it.
The application works just fine over HTTP, but when I enabled a TLS certificate to turn on HTTPS, the WebSocket communication stopped working.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mywebapiservice
  namespace: mywebapi
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: 'true'
    kubernetes.io/ingress.class: mywebapi_customclass
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-connect-timeout: '7200'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '7200'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '7200'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  tls:
  - hosts:
    - mywebapi.eastus.cloudapp.azure.com
    secretName: my-tls-secret
  rules:
  - host: mywebapi.eastus.cloudapp.azure.com
    http:
      paths:
      - path: /app/api/ServiceA/subscription
        backend:
          serviceName: ServiceA
          servicePort: 81
      - path: /app/api/ServiceA/(.+)
        backend:
          serviceName: ServiceA
          servicePort: 80
      - path: /app/api/ServiceB/subscription
        backend:
          serviceName: ServiceB
          servicePort: 81
      - path: /app/api/ServiceB/(.+)
        backend:
          serviceName: ServiceB
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: xx.xx.xxx.xxx
The C# call await listener.GetContextAsync() is never reached:
listener = new HttpListener();
listener.Prefixes.Add("wss://*:81/app/api/ServiceB/subscription/");
var listenerContext = await listener.GetContextAsync();
The error that I get on the client side is: Error during WebSocket handshake: Unexpected response code: 503
Please suggest if you see something wrong here.
I have used the sample gRPC HelloWorld application from https://github.com/grpc/grpc-go/tree/master/examples/helloworld. The example runs smoothly on my local system.
I want to deploy it to Kubernetes with the use of an Ingress.
Below are my config files.
service.yaml - as NodePort
apiVersion: v1
kind: Service
metadata:
  name: grpc-scratch
  labels:
    run: grpc-scratch
  annotations:
    service.alpha.kubernetes.io/app-protocols: '{"grpc":"HTTP2"}'
spec:
  type: NodePort
  ports:
  - name: grpc
    port: 50051
    protocol: TCP
    targetPort: 50051
  selector:
    run: example-grpc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc
          servicePort: 50051
I am unable to make a gRPC request to the server at the URL xyz.com/grpc. I get the error:
{
  "error": "14 UNAVAILABLE: Name resolution failure"
}
If I make the request to xyz.com, the error is:
{
  "error": "14 UNAVAILABLE: Trying to connect an http1.x server"
}
Any help would be appreciated.
A backend in an Ingress object is a combination of the service name and port.
In your case you have serviceName: grpc as the backend, while your service's actual name is grpc-scratch. Note that servicePort can also reference the port by its name (grpc) instead of its number:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress
  annotations:
    nginx.org/grpc-services: "grpc"
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - xyz.com
    secretName: grpc-secret
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /grpc
        backend:
          serviceName: grpc-scratch
          servicePort: grpc
I am trying to configure an ingress in OpenShift for one of my requirements. I need to route requests to different pods based on the path, and an Ingress resource looked suitable for this. I have two services and an Ingress that routes to one of these services based on the path. Here is my configuration; the app is built with Spring Boot.
apiVersion: v1beta3
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: data-service-1
    annotations:
      description: Exposes and load balances the data-indexer-service services
  spec:
    ports:
    - port: 7555
      targetPort: 7555
    selector:
      name: data-service-1
- apiVersion: v1
  kind: Service
  metadata:
    name: data-service-2
    annotations:
      description: Exposes and load balances the data-indexer-service services
  spec:
    ports:
    - port: 7556
      targetPort: 7556
    selector:
      name: data-service-2
- apiVersion: v1
  kind: Route
  metadata:
    name: data-service-2
  spec:
    host: doc.data.test.com
    port:
      targetPort: 7556
    to:
      kind: Service
      name: data-service-2
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: entityreindexmap
  spec:
    rules:
    - host: doc.data.test.com
      http:
        paths:
        - path: /dbpath1
          backend:
            serviceName: data-service-1
            servicePort: 7555
        - path: /dbpath2
          backend:
            serviceName: data-service-2
            servicePort: 7556
I couldn't get this working. I tried doc.data.test.com/dbpath1 and doc.data.test.com/dbpath2. Any help is much appreciated.
I have an Ingress (the default GKE one) which handles all the SSL in front of my services. One of my services is a WebSocket service (Python autobahn). When I expose the service using a LoadBalancer, bypassing the ingress, everything works fine with ws://. When I instead expose it using a NodePort and pass through the ingress, I constantly see connections dropping even when no client is connecting. Here are the autobahn logs:
WARNING:autobahn.asyncio.websocket.WebSocketServerProtocol:dropping connection to peer tcp:10.156.0.58:36868 with abort=False: None
When I connect using a client with wss:// the connection is successful, but
a disconnection happens every few seconds (I could not get a consistent interval).
Although I do not think it is related, I changed the timeout of the related backend service in GCE to 3600 seconds, and I also tried to give it session affinity using both client IP and cookie, but none of this stopped the connections from dropping.
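For reference, the same timeout and session affinity can be expressed declaratively with GKE's BackendConfig CRD instead of editing the backend service by hand. A minimal sketch, assuming the CRD is available in the cluster (the BackendConfig name, target port and selector below are placeholders; on older clusters the apiVersion was cloud.google.com/v1beta1):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: ws-backendconfig   # placeholder name
spec:
  timeoutSec: 3600         # same value as the manual change mentioned above
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
---
apiVersion: v1
kind: Service
metadata:
  name: server-internal-ws-service
  annotations:
    cloud.google.com/backend-config: '{"default": "ws-backendconfig"}'
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080       # placeholder container port
  selector:
    app: server            # placeholder selector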
Here is my ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.ingressName }}-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: {{ .Values.staticIpName }}-static-ip
  labels:
    oriient-app: "rest-api"
    oriient-system: "IPS"
spec:
  tls:
  - secretName: sslcerts
  rules:
  - host: {{ .Values.restApiHost }}
    http:
      paths:
      - backend:
          serviceName: rest-api-internal-service
          servicePort: 80
  - host: {{ .Values.dashboardHost }}
    http:
      paths:
      - backend:
          serviceName: dashboard-internal-service
          servicePort: 80
  - host: {{ .Values.monitorHost }}
    http:
      paths:
      - backend:
          serviceName: monitor-internal-service
          servicePort: 80
  - host: {{ .Values.ipsHost }}
    http:
      paths:
      - backend:
          serviceName: server-internal-ws-service
          servicePort: 80
The ws service is the "server-internal-ws-service".
Any suggestions?
I did not solve the issue, but I worked around it by exposing my wss with a LoadBalancer service and implementing the secure layer of the WebSocket myself.
I saved the certificate (the private key and the full-chain public key, in PEM format) as a Secret, mounted it as a volume, and then in the Python code created an SSLContext and passed it to the asyncio create_server call.
To create the certificate secret, create a yaml:
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: sslcerts
data:
  # this is the base64 of your PEM full chain and private key
  tls.crt: XXX
  tls.key: YYY
and then:
kubectl apply -f [path to the yaml above]
In your server deployment mount the secret:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    ...
  name: server
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      ...
  template:
    metadata:
      labels:
        ...
    spec:
      volumes:
      - name: wss-ssl-certificate
        secret:
          secretName: sslcerts
      containers:
      - image: ...
        imagePullPolicy: Always
        name: server
        volumeMounts:
        - name: wss-ssl-certificate
          mountPath: /etc/wss
And in the Python code:
import asyncio
import ssl
from autobahn.asyncio.websocket import WebSocketServerFactory

sslcontext = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslcontext.load_cert_chain("/etc/wss/tls.crt", "/etc/wss/tls.key")
wssIpsClientsFactory = WebSocketServerFactory()
...
loop = asyncio.get_event_loop()
coro = loop.create_server(wssIpsClientsFactory, '0.0.0.0', 9000, ssl=sslcontext)
server = loop.run_until_complete(coro)
Hope it helps someone