Changing Kubernetes' node-proxy TCP keepalive time

How do I properly change the TCP keepalive time for node-proxy?
I am running Kubernetes in Google Container Engine and have set up an Ingress backed by the HTTP(S) Google Load Balancer. When I continuously make POST requests to the Ingress, I get a 502 error roughly once every 80 seconds, with a backend_connection_closed_before_data_sent_to_client error in Cloud Logging. This happens because the GLB's TCP keepalive (600 seconds) is longer than node-proxy's keepalive (I have no idea what that value is).
The logged error is detailed in https://cloud.google.com/compute/docs/load-balancing/http/.
Thanks!

You can use the BackendConfig custom resource, which exists on every GKE cluster, to configure timeouts and other parameters such as CDN; here is the documentation.
An example from there shows how to configure it via the Ingress.
This is the BackendConfig definition:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-bsc-backendconfig
spec:
  timeoutSec: 40
  connectionDraining:
    drainingTimeoutSec: 60
And this is how to reference it from the Service definition, through an annotation:
apiVersion: v1
kind: Service
metadata:
  name: my-bsc-service
  labels:
    purpose: bsc-config-demo
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
  type: NodePort
  selector:
    purpose: bsc-config-demo
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080

Just for the sake of understanding: when you use Google's solution to load-balance and manage your Kubernetes Ingress, you will have GLBC pods running in the kube-system namespace.
You can check this with:
kubectl -n kube-system get po
These pods are intended to route the incoming traffic from the actual Google Load Balancer.
I think the timeouts should be configured there, on the GLBC. You should check which annotations or ConfigMap settings the GLBC accepts for configuration, if any.
You can find details here:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc
https://github.com/kubernetes/ingress/blob/master/controllers/gce/README.md
https://github.com/kubernetes/ingress/blob/master/controllers/gce/rc.yaml#L64
Personally, I prefer to use the Nginx Ingress Controller for now; it has the necessary annotations and ConfigMap support (see the sketch after the link below).
See :
https://github.com/kubernetes/ingress/blob/master/controllers/nginx/README.md
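As a rough sketch (not part of the original answer), assuming the nginx Ingress controller is deployed with its standard ConfigMap, proxy timeouts could be raised with entries like these; the ConfigMap name and namespace must match whatever the controller was started with:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumption: adjust to your install
  namespace: ingress-nginx    # assumption: adjust to your install
data:
  # proxy timeouts in seconds; raise these above the upstream LB keepalive if needed
  proxy-connect-timeout: "60"
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"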

Related

How to expose CockroachDB using Ingress on Google Cloud for external load testing

For my current distributed-databases project at university, I have to deploy a CockroachDB cluster on Google Kubernetes Engine and run a YCSB load test against it.
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
Now I need to expose the DB Console on port 8080 as well as the database endpoint on port 26257.
So far I have changed the cockroachdb-public service to type: NodePort and exposed its ports using an Ingress. My current problem is exposing both ports (if possible on their default ports, 8080 and 26257) and having them reachable from YCSB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
  labels:
    app: cockroachdb
spec:
  backend:
    serviceName: cockroachdb-public
    servicePort: 26257
  rules:
  - http:
      paths:
      - path: /labs/*
        backend:
          serviceName: cockroachdb-public
          servicePort: 8080
      - path: /*
        backend:
          serviceName: cockroachdb-public
          servicePort: 26257
So far I have only managed to route them to different paths. I'm not sure this can work, because the JDBC driver used by YCSB speaks TCP, not HTTP.
How do I expose two ports of one service using an Ingress for TCP?
Focusing on:
How do I expose two ports of one service using an Ingress for TCP?
In general, an Ingress resource is used for HTTP/HTTPS traffic.
You cannot expose TCP traffic with an Ingress like the one in your question.
Side note!
There are options to use an Ingress controller to pass TCP/UDP traffic through (for example, nginx-ingress).
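For completeness, a minimal sketch of that nginx-ingress option (not part of this answer's recommended setup): the controller can expose raw TCP ports through a tcp-services ConfigMap, assuming it is deployed in the ingress-nginx namespace and started with --tcp-services-configmap pointing at it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx   # assumption: adjust to your install
data:
  # external port -> namespace/service:port
  "26257": "default/cockroachdb-public:26257"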
You could instead expose your application with a Service of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: cockroach-db-cockroachdb-public
  namespace: default
spec:
  ports:
  - name: grpc
    port: 26257
    protocol: TCP
    targetPort: grpc # (containerPort: 26257)
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http # (containerPort: 8080)
  selector:
    INSERT-SELECTOR-FROM-YOUR-APP # replace with your Pods' labels
  type: LoadBalancer
Disclaimer!
The above example is taken from the cockroachdb Helm chart with the modified value:
service.public.type="LoadBalancer"
With the above definition you will expose your Pods to external traffic on ports 8080 and 26257 through a TCP/UDP LoadBalancer. You can read more about it at the link below:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps: Creating a service of type LoadBalancer
The YCSB Client is going to run on another VM so that the results are comparable to other groups results.
If this VM is located within GCP infrastructure, you could also take a look at the Internal TCP/UDP LoadBalancer (a sketch follows the link below):
Cloud.google.com: Kubernetes Engine: Using an internal TCP/UDP load balancer
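A minimal sketch of such an internal load balancer, assuming the cluster runs on GKE (the Service name and selector here are placeholders, and the annotation form differs between GKE versions):
apiVersion: v1
kind: Service
metadata:
  name: cockroachdb-internal   # placeholder name
  annotations:
    # on newer GKE versions the annotation is: networking.gke.io/load-balancer-type: "Internal"
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: cockroachdb   # adjust to your Pods' labels
  ports:
  - name: grpc
    port: 26257
    protocol: TCP
    targetPort: 26257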
Also, I'm not sure about the annotations on your Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
In GKE, when you create an Ingress without specifying ingress.class, you get the gce controller. The ingress.citrix.com annotations are specific to the Citrix controller and will not work with the gce controller. A trimmed-down sketch for the gce controller follows below.
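This sketch keeps only the gce-compatible annotation (and, as noted above, it still only carries HTTP traffic, so it would serve the DB Console but not the SQL port):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: cockroachdb-public
          servicePort: 8080   # DB Console (HTTP); port 26257 needs the LoadBalancer approach above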
Additional resources:
Kubernetes.io: Docs: Ingress
Kubernetes.io: Docs: Service

Call a rest api from one pod to another pod in same kubernetes cluster

In my k8s cluster I have two Pods, podA and podB, in the same cluster. The microservice on podB is a Spring Boot REST API. The microservice on podA has the IP and port of podB in its application.yaml. Every time podB is recreated, its IP changes, which forces us to change the IP in podA's application.yaml. Please suggest a better way.
My limitation is that I can't change the code of podA.
A Service will provide a consistent DNS name for accessing Pods.
An application should never address a Pod directly unless you have a specific reason to (custom load balancing is one I can think of, or StatefulSets where pods have an identity).
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
You will then have a consistent DNS name to access any Pods that match the selector:
my-service.default.svc.cluster.local
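For example, podA's configuration could then point at the Service name instead of a Pod IP (the property names below are illustrative, not podA's actual keys):
# application.yaml of the microservice on podA (keys are hypothetical)
podb:
  host: my-service.default.svc.cluster.local   # or just "my-service" inside the same namespace
  port: 80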
That's what Services are for. Take a Postgres service:
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
You can use postgres-service in other Pods instead of referring to the Pod's IP address. You also get the advantage that k8s does some load balancing for you. A Spring Boot client could then reference it as sketched below.
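A minimal sketch, assuming a Spring Boot client in the same namespace (database name and credentials are placeholders):
# application.yaml (illustrative)
spring:
  datasource:
    url: jdbc:postgresql://postgres-service:5432/mydb
    username: myuser
    password: mypassword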

https for eks loadbalancer

I want to secure my web application running on Kubernetes (EKS).
I have one front-end service. The front-end service is running on port 80, and I want to serve it on port 443. When I run kubectl get all, I see that my load balancer is listening on port 443, but I am not able to open it in the browser.
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234567890:certificate/12345c409-ec32-41a8-8542-712345678
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: 123456789.dkr.ecr.us-west-2.amazonaws.com/demoui:demo123
        ports:
        - containerPort: 80
        env:
        - name: MESSAGE
          value: Hello Kubernetes!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200,404"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80} , {"HTTPS": 443}]'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
The AWS ALB Ingress Controller is designed to create an Application Load Balancer and the relevant AWS resources from the Ingress YAML configuration. The controller parses the load-balancer configuration from the Ingress YAML definition and then applies Target Groups, one per Kubernetes Service, with the specified instances and NodePorts exposed on particular nodes. On the top level, Listeners expose the connection port for the Load Balancer and make request-routing decisions according to the defined routing rules, as per the official AWS ALB Ingress Controller workflow documentation.
After this short theory tour, I have a few concerns about your current configuration:
First, I would recommend checking the AWS ALB Ingress Controller setup and inspecting the relevant logs:
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
Then verify whether the Load Balancer has been successfully created in the AWS console.
Inspect the Target Groups for the particular ALB to ensure that the health checks for all k8s instances are good.
Ensure that the Security Groups contain appropriate firewall rules for your instances to allow inbound and outbound network traffic across the ALB.
I also encourage you to get familiar with the dedicated chapter about HTTP to HTTPS redirection in the official AWS ALB Ingress Controller documentation.
Here is what I have for my cluster to serve HTTPS.
On my LoadBalancer Service:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: CERT
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
# ports using the ssl certificate
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# which protocol a Pod speaks
In my Ingress controller, in the ConfigMap for the nginx configuration:
force-ssl-redirect: "true"
Hope this works for you.
Another option is to redirect HTTP to HTTPS on the ALB itself, using the ssl-redirect action annotation (a sketch follows the link below):
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/
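Per that guide, the redirect action is then referenced as the first path in the Ingress rules; a sketch based on the manifests in the question (the listen-ports annotation must include HTTPS):
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: ssl-redirect      # matches the actions.ssl-redirect annotation above
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: hello-kubernetes
          servicePort: 80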

Websockets load balancing with HAProxy

I'm trying to configure an HAProxy Ingress controller to properly load-balance WebSocket connections. I tried raising the timeout-client, timeout-server, and timeout-connect values, but without success.
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test-deploy
  name: app-test
  labels:
    app: app-test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/timeout-connect: "5000"
    ingress.kubernetes.io/timeout-client: "5000"
    ingress.kubernetes.io/timeout-server: "5000"
    ingress.kubernetes.io/timeout-tunnel: "3600"
spec:
  rules:
  - host: k8s-test.local.lan
    http:
      paths:
      - path: /app-test
        backend:
          serviceName: app-test
          servicePort: 9000
I haven't found explicit confirmation of WebSocket support in the HAProxy documentation, but this post on Quora states that it works well. You may need to adjust the client/server/tunnel timeouts and sometimes match and route WebSocket traffic to the correct backend destination.
You can check the actual haproxy-ingress configuration using the following command:
kubectl exec -ti haproxy-ingress-pod-name -n ingress-controller -- cat /etc/haproxy/haproxy.cfg
If you have more than one Ingress controller in the cluster, you may need to specify an ingress class in an annotation on every Ingress object that should be handled by HAProxy Ingress:
kubernetes.io/ingress.class: "haproxy"
HAProxy Ingress is pretty much plain HAProxy with the ability to use Kubernetes Ingress objects to update its configuration. Global settings such as timeouts can also be set through the controller's ConfigMap, as sketched below.
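A minimal sketch, assuming the jcmoraisjr/haproxy-ingress controller started with --configmap pointing at this ConfigMap (the name and namespace here match the earlier kubectl exec example but depend on your deployment; the values are HAProxy time strings):
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress          # assumption: adjust to your install
  namespace: ingress-controller  # assumption: adjust to your install
data:
  timeout-client: "1h"
  timeout-server: "1h"
  timeout-connect: "10s"
  # keeps long-lived WebSocket tunnels open
  timeout-tunnel: "1h"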
You can find more information about configuring HAProxy and HAProxy Ingress in the articles:
Websockets Load Balancing with HAProxy
Using HAProxy as an API Gateway, Part 1 [Introduction]
HAProxy Ingress
Voyager
I hope this is helpful to you.

assign static IP to LoadBalancer service using k8s on aws

Objective: create a k8s LoadBalancer service on AWS whose IP is static
I have no problem accomplishing this on GKE by pre-allocating a static IP and passing it in via loadBalancerIP attribute:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dave
spec:
  loadBalancerIP: 17.18.19.20
...etc...
But doing the same in AWS results in the external IP being stuck at <pending> and an error in the Events history.
Removing the loadBalancerIP value allows k8s to spin up a Classic LB:
$ kubectl describe svc dave
Type: LoadBalancer
IP: 100.66.51.123
LoadBalancer Ingress: ade4d764eb6d511e7b27a06dfab75bc7-1387147973.us-west-2.elb.amazonaws.com
...etc...
but AWS explicitly warns me that the IPs are ephemeral (there are sometimes two of them), and Classic load balancers don't seem to support attaching static IPs.
Thanks for your time
As noted by @Quentin, AWS Network Load Balancer is now supported by Kubernetes:
https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/
Network Load Balancing in Kubernetes
Included in the release of Kubernetes 1.9, I added support for using the new Network Load Balancer with Kubernetes services. This is an alpha-level feature, and as of today is not ready for production clusters or workloads, so make sure you also read the documentation on NLB before trying it out. The only requirement to expose a service via NLB is to add the annotation service.beta.kubernetes.io/aws-load-balancer-type with the value of nlb.
A full example looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
