How can I enable SSL for Kubernetes services? - spring-boot

I have a Google Kubernetes Engine cluster with a Spring Boot application that has a public service endpoint, but I would like to change the endpoint from http://... to a secure https://....
For example:
https://xx.xxx.xx.xxxx:8085/getAllStudent instead of http://xx.xxx.xx.xxxx:8085/getAllStudent
How can I solve this?

It looks like you are using the NodePort type for your service. If you want to accept HTTPS on this port, the application behind it simply needs to serve HTTPS instead of HTTP.
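Since the question is tagged spring-boot: a minimal application.properties sketch for terminating TLS in the application itself, assuming a PKCS12 keystore (the keystore file name, alias and password are placeholders, not values from the question):
# serve HTTPS directly from the embedded server
server.port=8085
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=changeit
server.ssl.key-alias=myapp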
Using NodePort like this is not the recommended way, though; instead, use the proper Ingress functionality in Kubernetes to expose the service under a host name. An Ingress supports supplying an SSL certificate that is used to encrypt traffic over HTTPS:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.foo.com
    secretName: testsecret-tls
  rules:
  - host: sslexample.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80
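The testsecret-tls Secret referenced above holds the certificate and private key; assuming you already have them as files, it can be created with kubectl (the file names are placeholders):
kubectl create secret tls testsecret-tls --cert=tls.crt --key=tls.key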

Related

How to expose CockroachDB using Ingress on Google Cloud for external load testing

For my current distributed databases project in my studies I have to deploy a CockroachDB cluster on Google Cloud Kubernetes Engine and run a YCSB load test against it.
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
Now I need to expose the DB Console on port 8080 as well as the database endpoint on port 26257.
So far I have changed the cockroachdb-public service to type: NodePort and exposed its ports using an Ingress. My current problem is exposing both ports (if possible on their default ports 8080 and 26257) and having them accessible from YCSB.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
  labels:
    app: cockroachdb
spec:
  backend:
    serviceName: cockroachdb-public
    servicePort: 26257
  rules:
  - http:
      paths:
      - path: /labs/*
        backend:
          serviceName: cockroachdb-public
          servicePort: 8080
      - path: /*
        backend:
          serviceName: cockroachdb-public
          servicePort: 26257
So far I have only managed to route them to different paths. I'm not sure this can work, because the JDBC driver used by YCSB uses plain TCP, not HTTP.
How do I expose two ports of one service using an Ingress for TCP?
Focusing on:
How do I expose two ports of one service using an Ingress for TCP?
In general, an Ingress resource is meant for HTTP/HTTPS traffic.
You cannot expose plain TCP traffic with an Ingress like the one in your question.
Side note!
There are some options to use an Ingress controller to pass TCP/UDP traffic (for example nginx-ingress), as sketched below.
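As a sketch of that option: the nginx-ingress controller reads a tcp-services ConfigMap that maps an external port to a namespace/service:port pair (the ConfigMap name and namespace depend on how the controller was deployed):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # expose port 26257 and forward it to cockroachdb-public in the default namespace
  "26257": "default/cockroachdb-public:26257"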
You could instead expose your application with a Service of type LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: cockroach-db-cockroachdb-public
  namespace: default
spec:
  ports:
  - name: grpc
    port: 26257
    protocol: TCP
    targetPort: grpc # (containerPort: 26257)
  - name: http
    port: 8080
    protocol: TCP
    targetPort: http # (containerPort: 8080)
  selector:
    INSERT-SELECTOR-FROM-YOUR-APP
  type: LoadBalancer
Disclaimer!
The above example is taken from the cockroachdb Helm chart with the modified value:
service.public.type="LoadBalancer"
With this definition you will expose your Pods to external traffic on ports 8080 and 26257 through a TCP/UDP load balancer. You can read more about it at the link below:
Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps: Creating a service of type LoadBalancer
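Once the load balancer is provisioned, the external IP that the YCSB VM should target appears in the EXTERNAL-IP column (standard kubectl, using the Service name from the example above):
kubectl get service cockroach-db-cockroachdb-public --watch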
The YCSB client is going to run on another VM so that the results are comparable to other groups' results.
If this VM is located in GCP infrastructure, you could also take a look at an internal TCP/UDP load balancer:
Cloud.google.com: Kubernetes Engine: Using an internal TCP/UDP load balancer
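On GKE an internal TCP/UDP load balancer is requested with a single annotation on the Service; a minimal sketch reusing the example above (selector placeholder as before):
apiVersion: v1
kind: Service
metadata:
  name: cockroach-db-cockroachdb-public
  annotations:
    # restricts the load balancer to internal GCP traffic
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  ports:
  - name: grpc
    port: 26257
    protocol: TCP
    targetPort: grpc
  selector:
    INSERT-SELECTOR-FROM-YOUR-APP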
Also I'm not sure about the annotations of your Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: cockroachdb-global-ip
    ingress.citrix.com/insecure-service-type: "tcp"
    ingress.citrix.com/insecure-port: "6379"
In GKE, when you create an Ingress without specifying an ingress.class, you are using the gce controller. The ingress.citrix.com annotations are specific to the Citrix controller and will not work with the gce controller.
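If you want to be explicit about which controller handles an Ingress, set the class annotation (a minimal sketch):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cockroachdb-ingress
  annotations:
    # explicitly select the GKE (gce) ingress controller
    kubernetes.io/ingress.class: "gce"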
Additional resources:
Kubernetes.io: Docs: Ingress
Kubernetes.io: Docs: Service

Not able to call external resources through kubernetes ingress

I am trying to configure Ingress resources in Kubernetes and I want to know if I can access external resources via Kubernetes (for example, I installed Kibana on a virtual machine and I want to access it through a Kubernetes Ingress as below):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: service1
          servicePort: 1000
      - path: "/test"
        backend:
          serviceName: service2.test
          servicePort: 2000
      - path: "/kibana"
        backend:
          serviceName: <ip-address>
          servicePort: 9092
Any suggestions: is this the right way of calling external resources, or can we not initiate a call since it is outside of Kubernetes?
I am trying to call it as test.com/kibana.
Please suggest.
For external resources you should create an Endpoints object.
This is explained under Services without selectors:
Services most commonly abstract access to Kubernetes Pods, but they can also abstract other kinds of backends. For example:
You want to have an external database cluster in production, but in your test environment you use your own databases.
You want to point your Service to a Service in a different Namespace or on another cluster.
You are migrating a workload to Kubernetes. Whilst evaluating the approach, you run only a proportion of your backends in Kubernetes.
In any of these scenarios you can define a Service without a Pod selector. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Because this Service has no selector, the corresponding Endpoint object is not created automatically. You can manually map the Service to the network address and port where it’s running, by adding an Endpoint object manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 192.0.2.42
  ports:
  - port: 9376
So once you add the Endpoints object and set up a Service for it, you will be able to use it inside an Ingress.
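Applied to your case, the /kibana path would then reference the selector-less Service instead of a raw IP (a sketch reusing my-service from the docs example; the Endpoints would carry the Kibana VM's address and port):
- path: "/kibana"
  backend:
    serviceName: my-service
    servicePort: 80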

https for eks loadbalancer

I want to secure my web application running on Kubernetes (EKS).
I have one front-end service. The front-end service is running on port 80 and I want to run it on port 443. When I run kubectl get all, I see that my load balancer is listening on port 443, but I am not able to open it in the browser.
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234567890:certificate/12345c409-ec32-41a8-8542-712345678
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 80
    protocol: TCP
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: 123456789.dkr.ecr.us-west-2.amazonaws.com/demoui:demo123
        ports:
        - containerPort: 80
        env:
        - name: MESSAGE
          value: Hello Kubernetes!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200,404"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
The AWS ALB Ingress Controller is designed to create an Application Load Balancer and the relevant AWS resources from the Ingress YAML configuration file. The controller parses the load balancer configuration from the Ingress definition and then applies Target Groups, one per Kubernetes Service, with the specified instances and NodePorts exposed on particular nodes. On the top level, Listeners expose the connection port of the Load Balancer and make request routing decisions according to the defined routing rules, as per the official AWS ALB Ingress Controller workflow documentation.
After that short theory tour, I have a few concerns about your current configuration.
First, I would recommend checking the AWS ALB Ingress Controller setup and inspecting the relevant logs:
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "alb-ingress[a-zA-Z0-9-]+")
Then verify whether the Load Balancer has been successfully created in the AWS console.
Inspect the Target Groups for the particular ALB to ensure that the health checks for the k8s instances are all passing.
Ensure that the Security Groups contain appropriate firewall rules for your instances, so that inbound and outbound network traffic can flow through the ALB.
I also encourage you to get familiar with the dedicated chapter about HTTP to HTTPS redirection in the official AWS ALB Ingress Controller documentation.
Here is what I have for my cluster to run on HTTPS.
In my load balancer Service:
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: CERT
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https # ports using the ssl certificate
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http # which protocol a Pod speaks
In my nginx Ingress configuration, the redirect annotation:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
Hope this works for you.
For the ALB Ingress Controller, HTTP to HTTPS redirection can be configured with an action annotation:
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/tasks/ssl_redirect/
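Per that guide, the redirect action is then referenced as the first path rule of the Ingress (a sketch following the linked documentation, with the service name taken from the question):
spec:
  rules:
  - http:
      paths:
      # the ssl-redirect action must come first so plain HTTP requests get redirected
      - path: /*
        backend:
          serviceName: ssl-redirect
          servicePort: use-annotation
      - path: /*
        backend:
          serviceName: hello-kubernetes
          servicePort: 80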

Kubernetes https ingress 400 response

I have a bare-metal kubernetes cluster (1.13) and am running nginx ingress controller (deployed via helm into the default namespace, v0.22.0).
I have an ingress in a different namespace that attempts to use the nginx controller.
#ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: myapp
  annotations:
    kubernetes.io/backend-protocol: https
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
  tls:
  - hosts:
    - my-host
    secretName: tls-cert
  rules:
  - host: my-host
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: https
        path: "/api/(.*)"
The nginx controller successfully finds the ingress and says that there are endpoints. If I hit the endpoint, I get a 400 with no content. If I turn on custom-http-headers, then I get a 404 from nginx; my service is not being hit. According to the rewrite logging, the URL is being rewritten correctly.
I have also hit the service directly from inside the pod, and that works as well.
#service.yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  ports:
  - name: https
    protocol: TCP
    port: 5000
    targetPort: https
  selector:
    app: my-app
  clusterIP: <redacted>
  type: ClusterIP
  sessionAffinity: None
What could be going wrong?
EDIT: Disabling HTTPS everywhere still gives the same 400 error. However, if my app is expecting HTTPS requests and nginx is sending HTTP requests, then the requests do get to the app (but it can't process them).
Nginx will silently fail with 400 if request headers are invalid (like special characters in them). You can debug that using tcpdump.
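A possible starting point is capturing what actually reaches the backend port so the raw request line and headers can be inspected (port 5000 matches the Service above; run this on the node or in a debug container):
tcpdump -i any -A -s 0 'tcp port 5000'
Also note that the nginx ingress controller expects nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" rather than kubernetes.io/backend-protocol, so with the annotation as posted nginx most likely speaks plain HTTP to the HTTPS backend, which would match the behaviour described in the EDIT.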

Websockets load balancing with HAProxy

I'm trying to configure an HAProxy ingress controller to properly load-balance WebSocket connections. I tried to raise the values of timeout-client, timeout-server and timeout-connect, but without success.
#ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: test-deploy
  name: app-test
  labels:
    app: app-test
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/timeout-connect: "5000"
    ingress.kubernetes.io/timeout-client: "5000"
    ingress.kubernetes.io/timeout-server: "5000"
    ingress.kubernetes.io/timeout-tunnel: "3600"
spec:
  rules:
  - host: k8s-test.local.lan
    http:
      paths:
      - path: /app-test
        backend:
          serviceName: app-test
          servicePort: 9000
I haven't found confirmation of WebSocket support in the HAProxy documentation, but this post on Quora states that it works great. You may need to adjust the client/server/tunnel timeouts and sometimes match and route WebSocket traffic to the correct backend destination.
You can check the actual haproxy-ingress configuration using the following command:
kubectl exec -ti haproxy-ingress-pod-name -n ingress-controller -- cat /etc/haproxy/haproxy.cfg
If you have more than one ingress controller in the cluster, you may need to specify the ingress class in an annotation on every Ingress object that should be handled by HAProxy ingress:
kubernetes.io/ingress.class: "haproxy"
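That is, each Ingress that HAProxy should pick up carries the class annotation in its metadata (a minimal sketch based on the Ingress from the question):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-test
  annotations:
    kubernetes.io/ingress.class: "haproxy"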
HAProxy ingress is pretty much the same HAProxy, with the capability to use Kubernetes Ingress objects to update its configuration.
You can find more information about configuring HAProxy and HAProxy Ingress in these articles:
Websockets Load Balancing with HAProxy
Using HAProxy as an API Gateway, Part 1 [Introduction]
HAProxy Ingress
Voyager
Hope this is helpful to you.
