I'm trying to configure an HAProxy ingress controller to properly load-balance WebSocket connections. I tried raising the timeout-client, timeout-server and timeout-connect values, but without success.
ingress.yaml
kind: Ingress
metadata:
  namespace: test-deploy
  name: app-test
  labels:
    app: app-test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/timeout-connect: "5000"
    ingress.kubernetes.io/timeout-client: "5000"
    ingress.kubernetes.io/timeout-server: "5000"
    ingress.kubernetes.io/timeout-tunnel: "3600"
spec:
  rules:
  - host: k8s-test.local.lan
    http:
      paths:
      - path: /app-test
        backend:
          serviceName: app-test
          servicePort: 9000
I haven't found explicit confirmation of WebSocket support in the HAProxy documentation, but this post on Quora states that it works well. You may need to adjust the client/server/tunnel timeouts and sometimes match and route WebSocket traffic to the correct backend destination.
You can check the actual haproxy-ingress configuration using the following command:
kubectl exec -ti haproxy-ingress-pod-name -n ingress-controller -- cat /etc/haproxy/haproxy.cfg
If you have more than one ingress controller in the cluster, you may need to specify an ingress class annotation on every Ingress object that HAProxy Ingress should handle:
kubernetes.io/ingress.class: "haproxy"
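For example (a minimal sketch; the metadata below is illustrative and reuses the names from your manifest), the annotation sits under metadata.annotations of each Ingress, alongside the timeout annotations you already have:
metadata:
  name: app-test
  namespace: test-deploy
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/timeout-tunnel: "3600"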
HAProxy Ingress is essentially the same HAProxy, with the added capability of using Kubernetes Ingress objects to update its configuration.
You can find more information about configuring HAProxy and HAProxy Ingress in the articles:
Websockets Load Balancing with HAProxy
Using HAProxy as an API Gateway, Part 1 [Introduction]
HAProxy Ingress
Voyager
Hope this helps.
For my current distributed databases course project I need to deploy a CockroachDB cluster on Google Kubernetes Engine and run a YCSB load test against it.
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
Now I need to expose the DB Console on Port 8080 as well as the Database Endpoint on Port 26257.
So far I have started by changing the cockroachdb-public Service to type: NodePort and exposing its ports using an Ingress. My current problem is exposing both ports (if possible on their default ports 8080 and 26257) and having them accessible from YCSB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
  labels:
    app: cockroachdb
spec:
  backend:
    serviceName: cockroachdb-public
    servicePort: 26257
  rules:
  - http:
      paths:
      - path: /labs/*
        backend:
          serviceName: cockroachdb-public
          servicePort: 8080
      - path: /*
        backend:
          serviceName: cockroachdb-public
          servicePort: 26257
So far I have only managed to route it to different paths. I'm not sure this can work, because the JDBC driver used by YCSB speaks plain TCP, not HTTP.
How do I expose two ports of one service using an Ingress for TCP?
Focusing on:
How do I expose two ports of one service using an Ingress for TCP?
In general, an Ingress resource is meant for HTTP/HTTPS traffic.
You cannot expose the TCP traffic with an Ingress like the one mentioned in your question.
Side note!
There are some options to pass TCP/UDP traffic through an Ingress controller (for example nginx-ingress); see the sketch below.
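A hedged sketch of that option: ingress-nginx reads TCP port mappings from a ConfigMap referenced by its --tcp-services-configmap flag; the name and namespace below are assumptions based on the upstream example manifests, so adjust them to your install:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services          # must match the controller's --tcp-services-configmap flag
  namespace: ingress-nginx
data:
  # <external port>: <namespace>/<service name>:<service port>
  "26257": "default/cockroachdb-public:26257"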
You could expose your application with a Service of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: cockroach-db-cockroachdb-public
  namespace: default
spec:
  ports:
  - name: grpc
    port: 26257
    protocol: TCP
    targetPort: grpc # (containerPort: 26257)
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http # (containerPort: 8080)
  selector:
    selector: INSERT-SELECTOR-FROM-YOUR-APP
  type: LoadBalancer
Disclaimer!
The above example is taken from the cockroachdb Helm chart with a modified value:
service.public.type="LoadBalancer"
With the above definition you expose your Pods to external traffic on ports 8080 and 26257 through a TCP/UDP load balancer. You can read more about it at the link below:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps: Creating a service of type LoadBalancer
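After applying a Service like the one above, you can watch for the external IP that GCP assigns (the Service name here matches the Helm-generated example; adjust it to yours):
kubectl get svc cockroach-db-cockroachdb-public -n default --watch
# wait until the EXTERNAL-IP column changes from <pending> to an address,
# then point YCSB at <EXTERNAL-IP>:26257 and the console at <EXTERNAL-IP>:8080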
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
If this VM is located in GCP infrastructure, you could also take a look at an internal TCP/UDP load balancer:
Cloud.google.com: Kubernetes Engine: Using an internal TCP/UDP load balancer
Also I'm not sure about the annotations of your Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
In GKE, when you create an Ingress without specifying an ingress.class, the gce controller handles it. The ingress.citrix.com annotations are specific to the Citrix controller and will not work with the gce controller.
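If you want the default GKE controller to handle this Ingress, a cleaner variant (a sketch; adapt as needed) would keep only gce-compatible annotations and optionally pin the class explicitly:
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"                                  # the default on GKE if omitted
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip  # gce-specific, kept from your spec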
Additional resources:
Kubernetes.io: Docs: Ingress
Kubernetes.io: Docs: Service
I am trying to make a WebSocket service work on an Azure Kubernetes cluster in our organization's environment.
My existing environment also has a REST API and an Angular application working behind the ingress with SSL.
But when I added the WebSocket service to the ingress, it does not work.
So I tried an Azure free subscription to first implement the same thing WITHOUT SSL. For my applications I enabled HTTP routing and used the addon-http-application-routing annotation.
I am getting the error below.
'ws://40.119.7.246/ws' failed: Error during WebSocket handshake: Unexpected response code: 404
Please help me figure out where I am going wrong.
Below are the details of the configuration.
Dockerfile
FROM node:alpine
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
RUN npm run build
EXPOSE 8010
CMD ["node","./dist/server.js"]
socketserver.yaml - contains the Deployment & Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: socketserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: socketserver
  template:
    metadata:
      labels:
        app: socketserver
    spec:
      containers:
      - name: socketserver
        image: regkompella.azurecr.io/socketserver:1.0.0
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 8010
      imagePullSecrets:
      - name: regkompella-azurecr-io
---
apiVersion: v1
kind: Service
metadata:
  name: socketserver-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8010
  selector:
    app: socketserver
  type: ClusterIP
---
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    nginx.ingress.kubernetes.io/websocket-services: socketserver-svc
    nginx.org/websocket-services: socketserver-svc
spec:
  rules:
  - host: demosocket.com
  - http:
      paths:
      - path: /
        backend:
          serviceName: angular-application-svc
          servicePort: 80
      - path: /ws
        backend:
          serviceName: socketserver-svc
          servicePort: 80
After reading through a lot of articles and some GitHub threads (referenced articles added below), I got to a point where my WebSocket implementation started working after doing two things. I am not sure yet whether this is the right way to do it; I arrived at this solution purely by trial and error. So if you have a better grasp of this, please suggest a better way to solve my problem, and take my steps with a pinch of salt.
Installed the NGINX Ingress Controller from the linked guide.
As I am using Azure Kubernetes Service, I applied the YAML below from that document:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
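Before changing the Ingress, it is worth confirming that the controller is actually running (the ingress-nginx namespace is what that manifest creates, so treat the name as an assumption tied to that version):
kubectl get pods -n ingress-nginx
# the ingress-nginx-controller pod should be Running before the Ingress below is applied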
Next, I made the necessary changes to my demo-ingress configuration for my services.
I learned that the kubernetes.io/ingress.class: addon-http-application-routing annotation doesn't support WebSockets, so I had to disable it.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # this one annotation is making the websocket work.
    nginx.ingress.kubernetes.io/websocket-services: socketserver-svc
    # this one I left as-is; it plays no role in making the websocket
    # implementation work.
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-body-size: 10m
    # I thought sticky sessions were also required for websockets to work,
    # but they seem to have no effect after I installed the nginx ingress
    # controller, so I disabled all the annotations below as well.
    #nginx.org/websocket-services: socketserver-svc
    #nginx.ingress.kubernetes.io/affinity: cookie
    #nginx.ingress.kubernetes.io/affinity-mode: balanced
    #nginx.ingress.kubernetes.io/session-cookie-samesite: Strict
    #kubernetes.io/ingress.class: nginx
    #kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: demosocket.com
  - http:
      paths:
      - path: /ws
        backend:
          serviceName: socketserver-svc
          servicePort: 80
I tried accessing it through the public IP address, and I was able to successfully send and receive messages.
ws://52.188.38.118/ws
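That public IP is just the external IP of the ingress-nginx controller Service created in step 1; assuming the default names from that manifest, it can be looked up with:
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'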
Now, what if I want to make the WebSocket implementation work without installing the NGINX Ingress Controller (step 1 above) and instead use the default ingress controller that comes with AKS/minikube? The answer is below.
From the steps above:
a) Skip step 1 (installing the NGINX Ingress Controller).
b) The only change needed is on the Ingress: use the annotations below instead of the ones shown in step 2. Things will start working.
# this annotation keeps my web application working too, if I configure something in the future.
nginx.ingress.kubernetes.io/ingress.class: nginx
# this one annotation is making the websocket work.
nginx.ingress.kubernetes.io/websocket-services: socketserver-svc
# by default ssl-redirect is true; as I am trying this locally I want to disable it, so set it to false.
nginx.ingress.kubernetes.io/ssl-redirect: "false"
# below are just additional annotations to allow CORS etc.
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, PUT, POST, DELETE, OPTIONS"
nginx.ingress.kubernetes.io/proxy-body-size: 10m
Referenced Articles:
https://medium.com/flant-com/comparing-ingress-controllers-for-kubernetes-9b397483b46b
https://kubernetes.github.io/ingress-nginx/deploy/#azure
Mr. dstrebel's comments -> https://github.com/Azure/AKS/issues/768
I typically recommend just setting up an Ingress Controller on the cluster and not enabling "http-application-routing", as there are a lot of limitations to it. The goal with HTTP Application Routing was for users to get set up quickly with Ingress, but not really for production deployments, due to the limitations of the configuration.
DenisBiondic commented on Oct 2, 2018 -> https://github.com/Azure/AKS/issues/672
I am not 100% certain, since I don't use the application routing feature, but I think it does not use the https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/websocket controller but rather https://github.com/kubernetes/ingress-nginx. In case of the latter, I think enabling session affinity with cookies might be enough: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#session-affinity
In your case you are using the wrong annotation, which does not work with the application routing ingress controller under the hood.
I welcome suggestions and best practices.
I want to secure my web application running on Kubernetes (EKS).
I have one front-end service. The front-end service is running on port 80, and I want to serve it on port 443. When I run kubectl get all, I see that my load balancer is listening on port 443, but I am not able to open it in the browser.
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234567890:certificate/12345c409-ec32-41a8-8542-712345678
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: 123456789.dkr.ecr.us-west-2.amazonaws.com/demoui:demo123
        ports:
        - containerPort: 80
        env:
        - name: MESSAGE
          value: Hello Kubernetes!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200,404"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
The AWS ALB Ingress Controller is designed to create an Application Load Balancer and the relevant AWS-level resources from an Ingress YAML configuration file. The controller parses the load balancer configuration from the Ingress definition and then creates Target Groups, one per Kubernetes Service, registering the instances and the NodePorts exposed on particular nodes. On top of that, Listeners expose the connection port of the load balancer and make routing decisions according to the defined rules, as described in the official AWS ALB Ingress Controller workflow documentation.
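To illustrate that workflow with a sketch (an assumption based on the instance target mode described above, not something taken from the question), the Service referenced by the Ingress is usually a plain NodePort rather than a LoadBalancer, so that the ALB target groups can register the allocated node ports:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: NodePort              # the ALB ingress controller registers the allocated node port in its target group
  selector:
    app: hello-kubernetes
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP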
After this short theory tour, I have a few concerns about your current configuration:
First, I would recommend checking the AWS ALB Ingress Controller setup and inspecting the relevant logs:
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
Then verify whether the load balancer has been successfully created in the AWS console (a quick kubectl check follows this list).
Inspect the Target Groups of the particular ALB to ensure that the health checks for the Kubernetes instances are all passing.
Ensure that the Security Groups contain the appropriate firewall rules for your instances, allowing inbound and outbound traffic to and from the ALB.
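As mentioned above, a quick way to confirm from the cluster side that the controller provisioned a load balancer for your Ingress (using the resource name from your question) is:
kubectl get ingress hello-ingress
# the ADDRESS column should show the DNS name of the provisioned ALB
kubectl describe ingress hello-ingress
# the events at the bottom report any errors from the ALB ingress controller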
I encourage you to get familiar with the dedicated chapter about HTTP to HTTPS redirection in the official AWS ALB Ingress Controller documentation.
Here is what I have for my cluster to run on HTTPS.
In my ingress/load balancer:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: CERT
# ports using the ssl certificate
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
# which protocol a Pod speaks
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
In my ingress controller, in the ConfigMap of the nginx configuration:
app.kubernetes.io/force-ssl-redirect: "true"
Hope this works for you.
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/
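For reference, the pattern documented there combines the ssl-redirect action annotation with a catch-all rule whose backend is the annotation itself. A sketch (adapted to the names from the question, so verify it against the controller version you run) looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
spec:
  rules:
  - http:
      paths:
      - path: /*                      # must be the first rule so HTTP traffic is redirected
        backend:
          serviceName: ssl-redirect   # refers to the actions.ssl-redirect annotation above
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: hello-kubernetes
          servicePort: 80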
How do I properly change the TCP keepalive time for node-proxy?
I am running Kubernetes in Google Container Engine and have set up an ingress backed by Google's HTTP(S) Load Balancer. When I continuously make POST requests to the ingress, I get a 502 error exactly once every 80 seconds or so, with a backend_connection_closed_before_data_sent_to_client error in Cloud Logging. This happens because the GLB's TCP keepalive (600 seconds) is larger than node-proxy's keepalive (I have no clue what that is).
The logged error is detailed in https://cloud.google.com/compute/docs/load-balancing/http/.
Thanks!
You can use the BackendConfig custom resource that exists on each GKE cluster to configure timeouts and other parameters such as CDN; here is the documentation.
An example from here shows how to configure it on the Ingress.
This is the BackendConfig definition:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
And this is how to use it, through an annotation on the Service that the Ingress points to:
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
Just for the sake of understanding: when you use Google's solution to load-balance and manage your Kubernetes Ingress, you will have GLBC pods running in the kube-system namespace.
You can check this with:
kubectl -n kube-system get po
These pods are intended to route the incoming traffic from the actual Google Load Balancer.
I think the timeouts should be configured there, on the GLBC. You should check what annotations or ConfigMap options the GLBC accepts, if any.
You can find details there :
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc
https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md
https://github.com/kubernetes/ingress/blob/master/controllers/gce/rc.yaml#L64
Personally, I prefer to use the NGINX Ingress Controller for now, as it has the necessary annotation and ConfigMap support.
See :
https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md
I'm trying to run a socket.io app using Google Container Engine. I've set up the Ingress, which creates a Google load balancer that points to the cluster. If I have one pod in the cluster, all works well. As soon as I add more, I get tons of socket.io errors. It looks like the connections end up going to different pods in the cluster, and I suspect that is the problem, given all the polling and upgrading socket.io does.
I set up the load balancer to use sticky sessions based on IP.
Does this only mean that it will have affinity to a particular NODE in the kubernetes cluster and not a POD?
How can I set it up to ensure session affinity to a particular POD in the cluster?
NOTE: I manually set the sessionAffinity on the cloud load balancer.
Here is my ingress YAML:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  backend:
    serviceName: my-service
    servicePort: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: myApp
spec:
  sessionAffinity: ClientIP
  type: NodePort
  ports:
  - port: 80
    targetPort: http-port
  selector:
    app: myApp
First off, you need to set session affinity at the Ingress resource level, not on your load balancer (the latter only provides affinity to a specific node in the target group).
Here is an example Ingress spec:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test-sticky
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: $HOST
    http:
      paths:
      - path: /
        backend:
          serviceName: $SERVICE_NAME
          servicePort: $SERVICE_PORT
Second, you probably need to tune your ingress controller to allow longer connection times; everything else supports WebSocket proxying by default. For example:
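A hedged example of such tuning with the nginx ingress controller (the values are illustrative): raise the proxy read/send timeouts on the Ingress so long-lived WebSocket connections are not cut off.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"   # seconds without a read before nginx closes the upstream connection
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"   # seconds without a write before nginx closes the upstream connection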
If you are still having issues, please provide the output of kubectl get -o yaml pod/<ingress-controller-pod> and kubectl get -o yaml ing/<your-ingress-name>.
Hope this helps, good luck!