External and internal load balancer with Traefik and Consul

I am trying to set up Traefik with an internal load balancer (that can contact the apps) and an external load balancer (that all ingress goes through, but which forwards all traffic to the internal Traefik server). My services themselves are registered in Consul.
The closest I get to a solution is to use consul-template to build a file-based configuration for the external LB and a consul-catalog-based config for the internal one.
I would like the external LB to have a backend for the internal LB that comes from consulCatalog, while I create the frontend for each domain manually (or via consul-template).
How could I achieve that? I am fairly new to Traefik...
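One way this could be wired up, as a sketch only: the external Traefik keeps a small file-provider fragment per public domain, and each frontend simply points at the internal Traefik. The snippet below assumes Traefik v2 syntax (where frontends are routers and backends are services); the domain and the internal address are placeholders, and a router defined in the file provider can also reference a service discovered by the consulCatalog provider using the name@consulcatalog notation.
# dynamic configuration for the external Traefik (file provider),
# written by hand or rendered by consul-template
http:
  routers:
    example-com:
      rule: "Host(`example.com`)"          # one router (frontend) per public domain
      entryPoints:
        - websecure
      service: internal-traefik            # or e.g. my-app@consulcatalog
  services:
    internal-traefik:
      loadBalancer:
        servers:
          - url: "http://10.0.0.10:80"     # placeholder address of the internal Traefik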

Related

Service communication in Consul

I am developing several services and use Consul as the service registry. I'm able to register all of my services with Consul.
Now, for the next step, I need service A to be able to communicate with service B.
Without a service registry, what I usually did was simply dispatch an HTTP client request from service A to service B.
But since I now have service discovery in place, should I look up service B's host address via Consul and then dispatch an HTTP client request to that address, something like that? Or does Consul also provide an API gateway, so that I only need to dispatch my HTTP client request from service A to Consul, and Consul will automatically forward it to the destination?
Also, if there is relevant documentation for my case, I would be very glad to take a look at it. (I can't find it myself; my Google search keywords are probably wrong.)
Consul supports two methods for service discovery: DNS and HTTP.
Applications can perform DNS lookups against their local Consul agent, which exposes a DNS server on port 8600 (you can also configure DNS forwarding). For example, an application can issue an A record query for web.service.consul and Consul will return a list of healthy instance endpoints for the web service. SRV lookups are also supported in order to retrieve the IP and port for a given service. The DNS interface also supports querying endpoints by service tag and data center. Details can be found at Consul.io: DNS - Service Lookups.
HTTP-based service discovery can be performed by querying the /v1/health/service/:name endpoint against the local agent. The following will return a full list of healthy and unhealthy endpoints for the service nginx.
$ curl http://127.0.0.1:8500/v1/health/service/nginx
You can use the passing query parameter to restrict the output to only healthy services.
$ curl "http://127.0.0.1:8500/v1/health/service/nginx?passing"
I recommend reviewing the guide Register a Service with Consul Service Discovery for more info on registering and querying services from the catalog.
Lastly, API gateways like Traefik and Solo's Gloo support using Consul for service discovery (see Traefik's Consul Catalog Provider and Gloo's Consul Services). You could configure your services to route requests to these gateways, and allow the gateway to forward to the backend destination.
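For reference, turning on Consul-based discovery in Traefik is essentially a provider switch in the static configuration. A minimal sketch, assuming Traefik v2 and a local Consul agent (the address is a placeholder):
# traefik.yml (static configuration)
providers:
  consulCatalog:
    endpoint:
      address: "127.0.0.1:8500"   # local Consul agent
    exposedByDefault: false       # only expose services that opt in via tags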
I ended up getting the list of services from Consul, performing name matching on it, and then getting the service address.
I use this endpoint to get the list of services and their data:
http://localhost:8500/v1/agent/services
So it's client-side discovery, I guess.

Endpoint target type 'DomainName' is not allowed for this profile

I am trying to create a new Traffic Manager profile with either the Performance or the Weighted routing method, but I keep getting stuck when trying to add an Azure endpoint.
I have two public IPs inside Azure, one with an optional DNS name and one without.
When I try to add either of these as an endpoint, I get the following error messages.
For the one with a DNS name on it:
Failed to save configuration changes to Traffic Manager profile 'profilename'. Error: Endpoint target type, 'DomainName', is not allowed for this profile. Valid values are: IPv4Address.
For the one without a DNS name:
No DNS name is configured.
If I choose External endpoint and add the IPv4 address directly, it works.
I have tried with several different Traffic Manager profiles. Is there something I am missing? I am stuck.
There are three types of endpoints supported by Traffic Manager:
Azure endpoints are used for services hosted in Azure.
External endpoints are used for IPv4/IPv6 addresses, FQDNs, or for services hosted outside Azure that can either be on-premises or with a different hosting provider.
Nested endpoints are used to combine Traffic Manager profiles to create more flexible traffic-routing schemes to support the needs of larger, more complex deployments.
...
Azure endpoints are used for Azure-based services in Traffic Manager.
The following Azure resource types are supported:
PaaS cloud services, Web Apps, Web App slots, and PublicIPAddress resources (which can be connected to VMs either directly or via an Azure Load Balancer). The publicIpAddress must have a DNS name assigned to be used in a Traffic Manager profile.
In this case, when you add a public IP address in the same subscription as an Azure endpoint, it is greyed out in the Azure portal if no DNS name is configured. You can add it once the public IP address is configured with an Azure-provided DNS name such as somedns.westus2.cloudapp.azure.com; this works on my side.
For example, here is a public IP address with a DNS name used for an Azure load balancer frontend.

Automatic Failover between Azure Internal Load Balancers

We are moving a workflow of our business to Azure. I currently have two VMs as an HA pair behind an internal load balancer in the North Central US Region as my production environment. I have mirrored this architecture in the South Central US Region for disaster recovery purposes. A vendor recommended I place an Azure Traffic Manager in front of the ILBs for automatic failover, but it appears that I cannot specify ILBs as endpoints for ATM. (For clarity, all connections to these ILBs are through VPNs.)
Our current plan is to put the IPs for both ILBs in a custom-built appliance placed on-prem, and the failover would happen on that appliance. However, it would greatly simplify things if we could present a single IP to that appliance, and let the failover happen in Azure instead.
Is there an Azure product or service, or perhaps more appropriate architecture that would allow for a single IP to be presented to the customer, but allow for automatic failover across regions?
It seems that you could configure an Application Gateway with an internal load balancer (ILB) endpoint. In this case, you will have a private frontend IP configuration for the Application Gateway. The Application Gateway is deployed in a dedicated subnet and lives in the same VNet as your internal backend VMs. Please note that in this case you can add the private VMs directly as backends instead of the internal load balancer's frontend IP address, because a private Application Gateway is itself an internal load balancer.
Moreover, the Application Gateway can also have a public frontend IP configuration; if so, you can configure that public frontend IP as an endpoint of Azure Traffic Manager.
Hope this helps.

How to terminate HTTPS traffic directly on Kubernetes container

I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or more generally TLS) traffic from outside the cluster directly on the container, and how would the configuration look in that case?
This is for an on-premises Kubernetes cluster that was set up with kubeadm (with Flannel as CNI plugin). If the Kubernetes Service were configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for service access from outside the cluster at https://my-service.my-domain, say, how could the web service running inside the container bind to address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (as far as I know) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE: My current understanding is that when using an Ingress, HTTPS traffic is terminated at the ingress controller (i.e. at the "edge" of the cluster) and further communication inside the cluster towards the backing container is unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
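For concreteness, the Service the question describes might be declared roughly like the sketch below (names, labels, and the target port are placeholders built around the question). kube-proxy would forward traffic that reaches a node for the external IP on to the pod, so the container does not bind 1.2.3.4 itself; the client would verify the certificate against the DNS name my-service.my-domain, so the certificate's SAN has to cover that name rather than the IP.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app              # hypothetical pod label
  externalIPs:
    - 1.2.3.4                # externally routed IP from the question
  ports:
    - port: 443              # TLS is terminated by the container itself
      targetPort: 8443       # hypothetical HTTPS port inside the container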
I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize, and encrypt service-to-service communication.
So you need a mesh with mTLS authentication, also known as service-to-service authentication.
Visually, Service A is your ingress service and Service B is the service for the HTTP container.
So you terminate external TLS traffic at the ingress controller, and from there it travels further inside the cluster with Istio mTLS encryption.
It's not exactly what you asked for ("terminate HTTPS traffic directly on the Kubernetes container"), but it does fulfill the requirement "What I want is encrypted communication all the way to the container".
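As a minimal sketch of how that mesh-wide mTLS is typically switched on once Istio is installed (istio-system is assumed to be the mesh's root namespace; adjust to your installation):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system    # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars accept only mutual-TLS traffic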

HTTPS Load Balancer to expose a Workload on Google Kubernetes

I created a custom HTTPS load balancer (details) and I need my Kubernetes workload to be exposed with this load balancer. For now, if I send a request to this endpoint, I get a 502 error.
When I choose the Expose option in the Workload Console page, there are only TCP and UDP service types available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes Ingress.
First, you need to expose the deployment from Kubernetes; for HTTPS choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test that by accessing the IP or by port forwarding.)
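As a rough sketch of that step (the names, labels, and target port are placeholders; depending on the ingress controller a NodePort or NEG-backed service may be required instead):
apiVersion: v1
kind: Service
metadata:
  name: your-service-name
spec:
  type: ClusterIP            # or LoadBalancer for a directly reachable external IP
  selector:
    app: your-app            # must match the deployment's pod labels
  ports:
    - port: 443
      targetPort: 8443       # hypothetical HTTPS port of the container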
Then you need to create the ingress.
Inside the YAML file, when defining the backend, set the port and serviceName that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
On GCP, when the Ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too.
Then, if you want to use the load balancer you already created, you just need to select the backend services from the LB that was created by the Ingress and add them there.
Also, the load balancer will only work if the health checks pass. You need a route that returns a 200 HTTPS response for that.
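On the health-check point, a hedged sketch: the GCE ingress controller commonly derives its health check from the pod's readinessProbe, so giving the container a probe that answers 200 on the serving port helps (the path and port below are placeholders):
# fragment of the deployment's pod template
containers:
  - name: your-app
    image: your-image:latest       # hypothetical image
    ports:
      - containerPort: 8443
    readinessProbe:
      httpGet:
        path: /healthz             # hypothetical route that returns 200
        port: 8443
        scheme: HTTPS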

Resources