How to terminate HTTPS traffic directly on a Kubernetes container

I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or more generally TLS) traffic from outside the cluster directly on the container, and how would the configuration look in that case?
This is for an on-premises Kubernetes cluster that was set up with kubeadm (with Flannel as the CNI plugin). Say the Kubernetes Service is configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) so that the service can be reached from outside the cluster at https://my-service.my-domain. How could the web service running inside the container bind to the address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (as far as I know) some local IP address instead? I currently don't see how this could be accomplished.
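For reference, the kind of Service I have in mind would look roughly like this (names and ports are placeholders; my-service.my-domain resolves to 1.2.3.4):

apiVersion: v1
kind: Service
metadata:
  name: my-service              # placeholder name
spec:
  selector:
    app: my-app                 # placeholder Pod label
  externalIPs:
    - 1.2.3.4                   # my-service.my-domain resolves here
  ports:
    - port: 443
      targetPort: 8443          # container port where the web server would listen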
UPDATE: My current understanding is that when using an Ingress, HTTPS traffic is terminated at the ingress controller (i.e. at the "edge" of the cluster) and further communication inside the cluster towards the backing container is unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).

I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize and encrypt service-to-service communication.
So, you need a mesh with mTLS authentication, also known as service-to-service authentication.
Visually, Service A is your ingress service and Service B is the service in front of your HTTP container.
So you terminate external TLS traffic on the ingress controller, and traffic then travels further inside the cluster encrypted with Istio mTLS.
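As a minimal sketch (assuming a reasonably recent Istio release), mesh-wide strict mTLS can be enforced with a PeerAuthentication policy like this:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system      # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT               # sidecars then only accept mutual-TLS traffic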
It's not exactly what you asked for -
terminate HTTPS traffic directly on Kubernetes container
Though it does fulfill the requirement -
What I want is encrypted communication all the way to the container

Related

How to send a request from a service running on an EKS cluster as a client to a different service running outside the network

Scenario: I have two services, let's say A and B.
At present, service A is running on EC2 instances with an ELB in front of it and makes calls to service B. On service B's side, we have whitelisted service A's IP so that requests are accepted only from whitelisted IPs.
Now we have migrated service A from EC2 instances to EKS. I am a bit new to EKS concepts, so I would like to know how we can allow service A to send requests to service B.
You can set up a NAT gateway with an Elastic IP address and route your cluster's egress through this NAT. You can then whitelist the NAT gateway's public EIP, which doesn't change.
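As a rough sketch (in CloudFormation-style YAML, with placeholder subnet and route table IDs), the NAT gateway with a fixed EIP could look like this:

Resources:
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: subnet-aaaa1111           # a public subnet (placeholder)
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: rtb-bbbb2222          # the private route table used by the cluster nodes (placeholder)
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway

Service B's whitelist would then only need to contain the EIP attached to NatEip.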

Redirect requests to a particular replica in Kubernetes

I am new to Kubernetes.
Suppose there is a service deployed using EKS with 4 replicas: A, B, C and D.
Usually the load balancer directs requests to these replicas.
But what if I want my request to go to replica A only, or to B only?
How can we achieve that?
Please share some links or steps for guidance.
What you could use is a Headless Service:
Sometimes you don't need load-balancing and a single Service IP. In
this case, you can create what are termed "headless" Services, by
explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service
discovery mechanisms, without being tied to Kubernetes'
implementation.
For headless Services, a cluster IP is not allocated, kube-proxy
does not handle these Services, and there is no load balancing or
proxying done by the platform for them. How DNS is automatically
configured depends on whether the Service has selectors defined:
With selectors
For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return records (addresses) that point directly to the
Pods backing the Service.
Without selectors
For headless Services that do not define selectors, the endpoints
controller does not create Endpoints records. However, the DNS
system looks for and configures either:
CNAME records for ExternalName-type Services.
A records for any Endpoints that share a name with the Service, for all other types.
So a headless Service is the same as the default ClusterIP Service, but without load balancing or proxying, which allows you to connect to a Pod directly.
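A minimal sketch of such a headless Service (names and ports are placeholders) could look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service    # placeholder name
spec:
  clusterIP: None              # "None" is what makes the Service headless
  selector:
    app: my-app                # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080

A DNS lookup for my-headless-service.<namespace>.svc.cluster.local then returns an A record per Pod, so a client can pick the specific replica it wants.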
You can also reference below guides for further assistance:
Building a headless service in Kubernetes
Kubernetes Headless service vs ClusterIP and traffic distribution

Services communication in consul

I am developing several services and use Consul as the service registry. I'm able to register all of my services with Consul.
Now, for the next step, I need to be able to communicate from service A to service B.
Without a service registry, what I usually did was simply dispatch an HTTP client request from service A to service B.
But since I now have service discovery in place, should I get service B's host address via Consul and then dispatch an HTTP client request to that address, or something like that? Or does Consul also provide an API gateway, so that I only need to dispatch my HTTP client request from service A to Consul, and Consul will automatically forward it to the destination?
Also, if there is relevant documentation about my case, I would be very glad to take a look at it. (I can't find the relevant documentation; probably my Google search keywords are wrong.)
Consul supports two methods for service discovery, DNS and HTTP.
Applications can perform DNS lookups against their local Consul agent which exposes a DNS server on port 8600 (you can also configure DNS forwarding). For example, an application can issue an A record query for web.service.consul and Consul will return a list of healthy instance endpoints for the web service. SRV lookups are also supported in order to retrieve the IP and port for a given service. The DNS interface also supports querying endpoints by service tag and data center. Details can be found at Consul.io: DNS - Service Lookups.
HTTP-based service discovery can be performed by querying the /v1/health/service/:name endpoint against the local agent. The following will return a full list of healthy and unhealthy endpoints for the service nginx.
$ curl http://127.0.0.1:8500/v1/health/service/nginx
You can use the passing query parameter to restrict the output to only healthy services.
$ curl "http://127.0.0.1:8500/v1/health/service/nginx?passing"
I recommend reviewing the guide Register a Service with Consul Service Discovery for more info on registering and querying services from the catalog.
Lastly, API gateways like Traefik and Solo's Gloo support using Consul for service discovery (see Traefik's Consul Catalog Provider and Gloo's Consul Services). You could configure your services to route requests to these gateways, and allow the gateway to forward to the backend destination.
I ended up getting the list of services from Consul, performing name matching on it, and then getting the service address.
I use this endpoint to get the list of services and their data:
http://localhost:8500/v1/agent/services
So it's client-side discovery, I guess.

Curl into an API cluster secured by OpenVPN from another cluster's pod's container

I have created 2 kubernetes clusters on AWS within a VPC.
1) Cluster dedicated to micro services (MI)
2) Cluster dedicated to Consul/Vault (Vault)
So basically both of those clusters can be reached through distinct classic public load balancers which expose the k8s APIs:
MI: https://api.k8s.domain.com
Vault: https://api.vault.domain.com
I also set up OpenVPN on both clusters, so you need to be logged in to the VPN to "curl" or "kubectl" into the clusters.
To do that I just added a new rule to the ELBs' security groups with the VPN's IP on port 443:
HTTPS 443 VPN's IP/32
At this point everything works correctly, which means I'm able to successfully "kubectl" into both clusters.
The next thing I need to do is to be able to curl from a pod's container within the Vault cluster into the MI cluster. Basically:
Vault Cluster --------> curl https://api.k8s.domain.com --header "Authorization: Bearer $TOKEN" --------> MI cluster
The problem is that at the moment clusters only allow traffic from VPN's IP.
To solve that, I added new rules to the security group of the MI cluster's load balancer.
Those new rules allow traffic from the private IPs of each of the Vault cluster's node and master instances.
But for some reason it does not work!
Please note that before adding restrictions to the ELB's security group, I made sure the communication works when both clusters allow all traffic (0.0.0.0/0).
So the question is: when I execute a curl command from a pod's container to another cluster's API within the same VPC, what is the container's IP address that needs to be added to the security group?
The Vault VPC's NAT gateway EIP had to be added to the ELB's security group to allow the traffic.
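In CloudFormation-style YAML, the extra rule could be sketched roughly like this (security group ID and EIP are placeholders):

MiElbIngressFromVaultNat:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: sg-0123abcd            # the MI ELB's security group (placeholder)
    IpProtocol: tcp
    FromPort: 443
    ToPort: 443
    CidrIp: 203.0.113.10/32         # the Vault VPC's NAT gateway EIP (placeholder)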

HTTPS Load Balancer to expose a Workload on Google Kubernetes

I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes Workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get the error 502.
When I choose the Expose option in the Workload Console page, there are only TCP and UDP service types available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes Ingress.
First, you need to expose the deployment from k8s; for HTTPS, choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test that by accessing the IP or by port forwarding.)
Then you need to create the Ingress.
Inside the YAML file, when choosing the backend, set the port and serviceName that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
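For context, a more complete sketch of the Service and Ingress pair, with placeholder names and ports and the older Ingress API version implied by the snippet above, might look like this:

apiVersion: v1
kind: Service
metadata:
  name: your-service-name
spec:
  type: ClusterIP               # or LoadBalancer / NodePort, as discussed above
  selector:
    app: your-app               # assumed Pod label
  ports:
    - port: 443
      targetPort: 8443          # assumed container port
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: your-ingress
spec:
  rules:
    - http:
        paths:
          - path: /some-route
            backend:
              serviceName: your-service-name
              servicePort: 443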
On GCP, when the Ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too.
Then, if you want to use the already created load balancer, you just need to select the backend services from the load balancer that was created by the Ingress and add them there.
Also, the load balancer will work only if the health checks pass. You need to use a route that returns a 200 HTTPS response for that.
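If the health checks are what's failing, one option (assuming the Pods expose a health endpoint that returns 200) is to add a readiness probe on that route, which GKE can use to derive the load balancer's health check in some setups; a sketch of the container spec fragment:

readinessProbe:
  httpGet:
    path: /healthz              # assumed health endpoint returning HTTP 200
    port: 8443                  # assumed container port
    scheme: HTTPS
  initialDelaySeconds: 5
  periodSeconds: 10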
