HTTPS Load Balancer to expose a Workload on Google Kubernetes Engine

I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes Workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get a 502 error.
When I choose the Expose option in the Workload Console page, there are only TCP and UDP service types available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)

You need to create a Kubernetes Ingress.
First, you need to expose the deployment from Kubernetes; for HTTPS choose port 443, and the Service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test it by accessing the IP or by port forwarding.)
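For example, a minimal Service sketch (the name your-service-name, the app: your-app label, and the target port 8443 are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: your-service-name    # referenced later by the Ingress backend
spec:
  type: ClusterIP            # or LoadBalancer if you want an external IP for testing
  selector:
    app: your-app            # must match your deployment's pod labels (assumed)
  ports:
    - port: 443              # port the Ingress backend will point at
      targetPort: 8443       # port the container actually listens on (assumed)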
Then you need to create the Ingress.
In the Ingress YAML, when defining the backend, set the servicePort and serviceName to match the Service that was configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
On GCP, when the Ingress is created, a load balancer is created for it; the backend services and instance groups are built automatically too.
If you then want to use the load balancer you already created, you just need to select the backend services from the load balancer that was created by the Ingress and add them there.
Also, the load balancer will only work if the health checks pass. For that, you need a route that returns a 200 response over HTTPS.
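For instance, a hedged readiness probe sketch on the deployment's container - newer GKE versions can derive the load balancer health check from the container's readiness probe, and in any case the probed route should return 200; the /healthz path and port 8443 here are assumptions:

readinessProbe:
  httpGet:
    path: /healthz          # must answer 200 for the backend to be marked healthy (assumed path)
    port: 8443              # assumed container port
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 10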


easiest way to expose an elastic load balancer to the internet

I've deployed grafana to an AWS EKS cluster and I want to be able to access it from a web browser. If I create a Kubernetes Service of type LoadBalancer, then, based on the very limited AWS networking knowledge I have, I know that this maps to an Elastic Load Balancer. I can get the name of this, go to Network & Security -> Network Interfaces, and get all the interfaces associated with it, one for each EC2 instance. Presumably it's the public IP address associated with each ELB network interface that I need in order to access my grafana service. Again, since my AWS networking knowledge is very lacking: what is the fastest and easiest way for me to make the grafana Kubernetes service accessible via my web browser?
The easiest way to expose any app running on Kubernetes is to create a Service of type LoadBalancer.
I myself use this for some of my services to get things up and running quickly.
To get the load balancer name I run
kubectl get svc
which gives me the load balancer's FQDN. I then map it to a DNS record.
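A minimal sketch of such a Service for grafana (the app: grafana label is an assumption; 3000 is Grafana's default HTTP port):

apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: LoadBalancer    # EKS provisions an ELB for this Service
  selector:
    app: grafana        # assumed pod label
  ports:
    - port: 80          # ELB listener port
      targetPort: 3000  # Grafana's default HTTP port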
The other way I use is to deploy the nginx-ingress-controller.
https://kubernetes.github.io/ingress-nginx/deploy/#aws
This creates a Service of type LoadBalancer.
I then create an Ingress, which is mapped to the ingress controller's ELB.
https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
I use this for all my apps, running them behind one load balancer mapped to a single ELB via the nginx-ingress-controller.
In my specific scenario, the solution was to open up the relevant port on the LoadBalancer service.

Redirect requests to particular replica in kubernetes

I am new to Kubernetes.
If there is a service deployed using EKS with 4 replicas A, B, C, and D,
usually the load balancer directs requests across these replicas.
But what if I want my request to go to replica A only, or to B only?
How can we achieve that?
Please share some links or steps for guidance.
What you could use are Headless Services:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation.
For headless Services, a cluster IP is not allocated, kube-proxy does not handle these Services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the Service has selectors defined:
With selectors
For headless Services that define selectors, the endpoints controller creates Endpoints records in the API, and modifies the DNS configuration to return records (addresses) that point directly to the Pods backing the Service.
Without selectors
For headless Services that do not define selectors, the endpoints controller does not create Endpoints records. However, the DNS system looks for and configures either:
CNAME records for ExternalName-type Services.
A records for any Endpoints that share a name with the Service, for all other types.
So, a headless Service is the same as the default ClusterIP Service, but without load balancing or proxying, therefore allowing you to connect to a Pod directly.
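As a minimal sketch (the Service name and the app: my-app label are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None       # "None" is what makes the Service headless
  selector:
    app: my-app         # assumed pod label
  ports:
    - port: 80

A DNS lookup of my-headless-service then returns the A records of all backing Pods, so a client can pick a specific one and connect to it directly; with a StatefulSet, each Pod additionally gets a stable per-replica DNS name (e.g. my-app-0.my-headless-service).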
You can also refer to the guides below for further assistance:
Building a headless service in Kubernetes
Kubernetes Headless service vs ClusterIP and traffic distribution

How to terminate HTTPS traffic directly on Kubernetes container

I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or more generally TLS) traffic from outside the cluster directly on the container, and how would the configuration look in that case?
This is for an on-premises Kubernetes cluster that was set up with kubeadm (with Flannel as the CNI plugin). Say the Kubernetes Service were configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for service access from outside the cluster at https://my-service.my-domain. How could the web service running inside the container bind to address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (as far as I know) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE: My current understanding is that when using an Ingress, HTTPS traffic would be terminated at the ingress controller (i.e. at the "edge" of the cluster) and further communication inside the cluster towards the backing container would be unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize, and encrypt service-to-service communication.
So, you need a mesh with mTLS authentication, also known as service-to-service authentication.
In that picture, Service A is your Ingress service and Service B is the Service in front of your HTTP container.
So, you terminate external TLS traffic on the ingress controller, and it travels further inside the cluster with Istio mTLS encryption.
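As a concrete illustration, a minimal sketch that enforces mTLS for every workload in a namespace (assuming Istio is installed and the namespace - hypothetically my-namespace - has sidecar injection enabled):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # hypothetical namespace with sidecar injection enabled
spec:
  mtls:
    mode: STRICT            # sidecars only accept mutual-TLS traffic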
It's not exactly what you asked for -
"terminate HTTPS traffic directly on Kubernetes container"
- though it fulfills the requirement:
"What I want is encrypted communication all the way to the container"

tunnel or proxy from app in one kubernetes cluster (local/minikube) to a database inside a different kubernetes cluster (on Google Container Engine)

I have a large read-only elasticsearch database running in a kubernetes cluster on Google Container Engine, and am using minikube to run a local dev instance of my app.
Is there a way I can have my app connect to the cloud elasticsearch instance so that I don't have to create a local test database with a subset of the data?
The database contains sensitive information, so it can't be visible outside its own cluster or VPC.
My fall-back is to run kubectl port-forward inside the local pod:
kubectl --cluster=<gke-database-cluster-name> --token='<token from ~/.kube/config>' port-forward elasticsearch-pod 9200
but this seems suboptimal.
I'd use an ExternalName Service like:
kind: Service
apiVersion: v1
metadata:
  name: elastic-db
  namespace: prod
spec:
  type: ExternalName
  externalName: your.elastic.endpoint.com
According to the docs
An ExternalName service is a special case of service that does not have selectors. It does not define any ports or endpoints. Rather, it serves as a way to return an alias to an external service residing outside the cluster.
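With that Service in place, your app running in minikube can connect to elastic-db (or elastic-db.prod.svc.cluster.local from another namespace); DNS returns a CNAME pointing at your.elastic.endpoint.com, and the app keeps using Elasticsearch's usual port, 9200 in this case.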
If you need to expose the elastic database, there are two ways of exposing applications to outside the cluster:
- Creating a Service of type LoadBalancer, which would load-balance the traffic across all instances of your elastic database. Once the Load Balancer is created on GKE, just add the load balancer's DNS name as the value for the elastic-db ExternalName created above.
- Using an Ingress controller. The Ingress controller will have an IP that is reachable from outside the cluster. Use that IP as the ExternalName for the elastic-db created above.

Communication between linked docker containers over http for api gateway

I'm currently working on a golang web app which is currently one application consisting of numerous packages and is deployed in an individual docker container. I have a redis instance and a mysql instance deployed and linked as separate containers. In order to get their addresses, I pull them from the environment variables set by docker. I would like to implement an api gateway pattern wherein I have one service which exposes the HTTP port (either 80 for http or 443 for https) called 'api' which proxies requests to other services. The other services ideally do not expose any ports publicly but rather are linked directly with the services they depend on.
So, api will be linked with all the services except for mysql and redis. Any service that needs to validate a user's session information will be linked with the user service, etc. My question is: how can I make my HTTP servers listen for HTTP requests on the ports that Docker links between my containers?
The simplest way to do this is Docker Compose. You simply define which services you want, and Docker Compose automatically links them on a dedicated network. Suppose you have your goapp, a redis instance, and a mysql instance, and you want to use nginx as your reverse proxy. Your docker-compose.yml file would look as follows:
services:
  redis:
    image: redis
  mysql:
    image: mysql
  goapp:
    image: myrepo/goapp
  nginx:
    image: nginx
    volumes:
      - /PATH/TO/MY/CONF/api.conf:/etc/nginx/conf.d/api.conf
    ports:
      - "443:443"
      - "80:80"
The advantage is that you can reference any service from other services by its name. So from your goapp you can reach your MySQL server under the hostname mysql, and so on. The only exposed ports (i.e. reachable from the host machine) are 443 and 80 of the nginx container.
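If you also want to make explicit which internal port goapp listens on - assuming 8080 here - expose documents it without publishing the port to the host:

  goapp:
    image: myrepo/goapp
    expose:
      - "8080"   # reachable as goapp:8080 from the other services, but not from the host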
You can start the whole system with docker-compose up!
