I am new to Kubernetes.
Suppose a service is deployed on EKS with 4 replicas A, B, C and D. Normally the load balancer distributes requests across these replicas.
But what if I want my request to go to replica A only, or to B only? How can we achieve that?
Please share some links or steps for guidance.
What you could use is a headless Service:
Sometimes you don't need load-balancing and a single Service IP. In
this case, you can create what are termed "headless" Services, by
explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service
discovery mechanisms, without being tied to Kubernetes'
implementation.
For headless Services, a cluster IP is not allocated, kube-proxy
does not handle these Services, and there is no load balancing or
proxying done by the platform for them. How DNS is automatically
configured depends on whether the Service has selectors defined:
With selectors
For headless Services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return records (addresses) that point directly to the
Pods backing the Service.
Without selectors
For headless Services that do not define selectors, the endpoints
controller does not create Endpoints records. However, the DNS
system looks for and configures either:
CNAME records for ExternalName-type Services.
A records for any Endpoints that share a name with the Service, for all other types.
So, a headless Service is the same as a default ClusterIP Service, but without load balancing or proxying, which therefore allows you to connect to a Pod directly.
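For example, here is a minimal sketch of a headless Service (the name my-service and the label app: my-app are assumptions for illustration):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None          # "None" makes the Service headless
  selector:
    app: my-app            # assumed Pod label
  ports:
    - port: 80
      targetPort: 8080
EOF
# from a Pod inside the cluster, DNS now returns the individual Pod IPs
# instead of a single Service IP
$ nslookup my-service.default.svc.cluster.local
If the replicas are managed by a StatefulSet that uses this Service as its governing service, each Pod also gets a stable DNS name of the form <pod-name>.my-service.default.svc.cluster.local, which lets you address replica A or B specifically.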
You can also refer to the guides below for further assistance:
Building a headless service in Kubernetes
Kubernetes Headless service vs ClusterIP and traffic distribution
I don't understand the difference between Consul's agent API and catalog API.
Although the Consul documentation has always emphasized that the agent and the catalog should not be confused, there are many endpoints that look similar, such as:
/catalog/services
/agent/services
When should I use the catalog and when the agent (as in the HTTP paths above)?
Which one is suitable for high-frequency calls?
Consul is designed for services to be registered against a Consul client agent which is running on the same host where a service is deployed. The /v1/agent/service/ endpoints provide a way for you to interact with services which are registered with the specific Consul agent to which you are communicating, and register new services against that agent.
Each Consul agent in the data center submits its registered service information to the Consul servers. The servers aggregate this information to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog). The /v1/catalog/ endpoints return that aggregated information.
I want to call out this sentence from the anti-entropy doc.
Consul treats the state of the agent as authoritative; if there are any differences between the agent and catalog view, the agent-local view will always be used.
The catalog APIs can be used to register or remove services/nodes from the catalog, but normally these operations should be performed against the client agents (using the /v1/agent/ APIs) since they are authoritative for data in Consul.
The /v1/agent/ APIs should be used for high frequency calls, and should be issued against the local Consul client agent running on the same node as the app, as opposed to communicating directly with the servers.
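As a hedged sketch (the service name web and port 8080 are just examples), registration goes through the local agent, while reads of the aggregated view go through the catalog:
# register a service with the local client agent (the authoritative source)
$ curl -X PUT -d '{"Name": "web", "Port": 8080}' http://127.0.0.1:8500/v1/agent/service/register
# list what this specific agent has registered
$ curl http://127.0.0.1:8500/v1/agent/services
# read the aggregated, data-center-wide view from the catalog
$ curl http://127.0.0.1:8500/v1/catalog/service/web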
I am developing several services and use Consul as the service registry. I'm able to register all of my services with Consul.
The next thing I need is for service A to be able to communicate with service B.
Without a service registry, I would usually just dispatch an HTTP client request from service A to service B.
But now that I have service discovery in place, should I look up service B's host address via Consul and then dispatch an HTTP client request to that address, something like that? Or does Consul also provide an API gateway, so that I only need to dispatch my HTTP request from service A to Consul, and Consul will automatically forward it to the destination?
Also, if there is documentation relevant to my case, I would be very glad to take a look at it. (I can't find the relevant documentation; probably my search keywords are wrong.)
Consul supports two methods for service discovery, DNS and HTTP.
Applications can perform DNS lookups against their local Consul agent which exposes a DNS server on port 8600 (you can also configure DNS forwarding). For example, an application can issue an A record query for web.service.consul and Consul will return a list of healthy instance endpoints for the web service. SRV lookups are also supported in order to retrieve the IP and port for a given service. The DNS interface also supports querying endpoints by service tag and data center. Details can be found at Consul.io: DNS - Service Lookups.
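For example, assuming a service registered under the name web, you could query the agent's DNS interface with dig:
# A record lookup returns the IPs of healthy instances
$ dig @127.0.0.1 -p 8600 web.service.consul A
# SRV lookup also returns the port of each instance
$ dig @127.0.0.1 -p 8600 web.service.consul SRV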
HTTP-based service discovery can be performed by querying the /v1/health/service/:name endpoint against the local agent. The following will return a full list of healthy and unhealthy endpoints for the service nginx.
$ curl http://127.0.0.1:8500/v1/health/service/nginx
You can use the passing query parameter to restrict the output to only healthy services.
$ curl "http://127.0.0.1:8500/v1/health/service/nginx?passing"
I recommend reviewing the guide Register a Service with Consul Service Discovery for more info on registering and querying services from the catalog.
Lastly, API gateways like Traefik and Solo's Gloo support using Consul for service discovery (see Traefik's Consul Catalog Provider and Gloo's Consul Services). You could configure your services to route requests to these gateways, and allow the gateway to forward to the backend destination.
I ended up getting the list of services from Consul, performing name matching on it, and then getting the service address.
I use this endpoint to get the list of services and their data:
http://localhost:8500/v1/agent/services
So it's client-side discovery, I guess.
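For reference, a hedged sketch of that name matching with jq (the service name service-b is hypothetical); the endpoint returns a map keyed by service ID:
$ curl -s http://localhost:8500/v1/agent/services \
    | jq -r 'to_entries[] | select(.value.Service == "service-b") | "\(.value.Address):\(.value.Port)"'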
We are moving a workflow of our business to Azure. I currently have two VMs as an HA pair behind an internal load balancer in the North Central US region as my production environment. I have mirrored this architecture in the South Central US region for disaster recovery purposes. A vendor recommended I place Azure Traffic Manager in front of the ILBs for automatic failover, but it appears that I cannot specify ILBs as endpoints for Traffic Manager. (For clarity, all connections to these ILBs are through VPNs.)
Our current plan is to put the IPs for both ILBs in a custom-built appliance placed on-prem, and the failover would happen on that appliance. However, it would greatly simplify things if we could present a single IP to that appliance, and let the failover happen in Azure instead.
Is there an Azure product or service, or perhaps more appropriate architecture that would allow for a single IP to be presented to the customer, but allow for automatic failover across regions?
It seems that you could configure an Application Gateway with an internal load balancer (ILB) endpoint. In this case, you will have a private frontend IP configuration for the Application Gateway. The Application Gateway is deployed in a dedicated subnet and lives in the same VNet as your internal backend VMs. Note that in this case you can add the private VMs directly as backends instead of the internal load balancer's frontend IP address, because a private Application Gateway is itself an internal load balancer.
Moreover, the Application Gateway can also have a public frontend IP configuration; if so, you can configure that public frontend IP as an endpoint of Azure Traffic Manager.
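As a rough sketch only (resource names and IPs are placeholders, and the exact flags and defaults may vary by Azure CLI version), a private-frontend Application Gateway could be created along these lines:
# create an Application Gateway with a private (ILB-style) frontend
# in its own dedicated subnet of the existing VNet
$ az network application-gateway create \
    --name my-appgw \
    --resource-group my-rg \
    --vnet-name my-vnet \
    --subnet appgw-subnet \
    --private-ip-address 10.0.2.10 \
    --servers 10.0.1.4 10.0.1.5 \
    --frontend-port 443 \
    --http-settings-port 443 \
    --http-settings-protocol Https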
Hope this helps.
I have so far configured servers inside Kubernetes containers that used HTTP or terminated HTTPS at the ingress controller. Is it possible to terminate HTTPS (or more generally TLS) traffic from outside the cluster directly on the container, and how would the configuration look in that case?
This is for an on-premises Kubernetes cluster that was set up with kubeadm (with Flannel as the CNI plugin). Say the Kubernetes Service were configured with externalIPs 1.2.3.4 (where my-service.my-domain resolves to 1.2.3.4) for service access from outside the cluster at https://my-service.my-domain. How could the web service running inside the container bind to address 1.2.3.4, and how could the client verify a server certificate for 1.2.3.4 when the container's IP address is (as far as I know) some local IP address instead? I currently don't see how this could be accomplished.
UPDATE: My current understanding is that when using an Ingress, HTTPS traffic is terminated at the ingress controller (i.e. at the "edge" of the cluster) and further communication inside the cluster towards the backing container is unencrypted. What I want is encrypted communication all the way to the container (both outside and inside the cluster).
I guess Istio's Envoy proxies are what you need; their main purpose is to authenticate, authorize and encrypt service-to-service communication.
So, you need a mesh with mTLS authentication, also known as service-to-service authentication.
Conceptually, Service A is your ingress service and Service B is the service in front of your HTTP container.
So you terminate external TLS traffic at the ingress controller, and the traffic then travels further inside the cluster with Istio mTLS encryption.
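For example, a minimal sketch of turning on strict mTLS mesh-wide (assuming Istio is installed and istio-system is the root namespace):
$ cat <<EOF | kubectl apply -f -
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plain-text service-to-service traffic
EOF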
It's not exactly what you asked for:
terminate HTTPS traffic directly on the Kubernetes container
though it fulfills the requirement:
What I want is encrypted communication all the way to the container
We're planning to develop some microservices based on the Play Framework. They will provide REST APIs, and many of them will be using Akka Cluster / Cluster Sharding under the hood.
We would like to have an API gateway that exposes the APIs of our internal services, but we're facing one big issue:
- Multiple instances of each service will be running under some IP and port.
- How will the API gateway know where the service instances are running?
- Is there maybe something load-balancer-like for Play that keeps track of all running services?
Which solution(s) could fill the role of the "API Gateway"/"Load Balancer"?
The question you're asking is not really related to the Play Framework, and there is no single answer that would solve what you need.
You could start by reading about Akka Service Discovery and then make your choice based on what fits you best.
We're building services with akka-http and akka-cluster, but use unrelated technologies to expose and run the services.
Check out:
Kong for an API gateway
Consul for DNS-based service discovery
Docker Swarm for running containers, with its mesh network for load balancing
You are looking for the following components:
Service Registry: The whole point of this component is to keep track of "which services are running at which addresses". It can be as simple as a database that keeps entries for all running services and their instances. Generally the orchestration service is responsible for registering new service instances with the service registry. Another option is to have the instances themselves notify the registry of their existence (see the sketch after this list).
Service Health Checker: This component is mostly responsible for doing periodic runtime checks on the registered service instances and telling the service registry if any of them are not working. The service registry can then mark those instances as "inactive" until the health checker finds them working again (if ever).
Service Resolution: This is the conceptual component responsible for enabling a client to somehow get to the running service instances.
Together, these components make up Service Discovery.
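To make the registry idea concrete, a purely hypothetical self-registration call could look like this (the endpoint, host name and payload are illustrative only, not any particular product's API):
# each instance announces itself to the registry on startup (hypothetical API)
$ curl -X PUT -d '{"service": "orders", "address": "10.0.1.17", "port": 9000}' \
    http://registry.internal:8080/v1/register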
In your case, you have load balancers, which can act as a form of service discovery.
I don't think load balancers are going to change much over time unless you require a very advanced architecture, so your API gateway can simply "know" the URLs of the load balancers for all your services. So you don't really need a service registry layer.
Your load balancers also inherently provide a health-check and quarantine mechanism for instances, so you don't need an extra health-check layer either.
The only piece missing, then, is registering your instances with the load balancer. This part you will have to figure out based on what your load balancers are and what ecosystem they live in.
If you live in the AWS ecosystem and your load balancers are ELBs, then you should have things sorted out in that respect.
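For example, with an Application or Network Load Balancer, registering an instance boils down to a call like the following (the ARN and instance ID are placeholders):
# add an instance to the target group behind the load balancer
$ aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef \
    --targets Id=i-0123456789abcdef0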
Based on Ivan's and Sarvesh's answers we did some research and discovered the Netflix OSS projects.
Eureka can be used as a service locator that integrates well with the Zuul API gateway. Sadly there's not much documentation on the configuration, so we looked further...
We've now finally chosen Kubernetes as the orchestrator.
Kubernetes knows about all running containers, so there's no need for an external service locator like Eureka.
Traefik is an API gateway that uses the Kubernetes API to discover all running microservice instances and does the load balancing.
Akka Management finds all nodes via the Kubernetes API and does the bootstrapping of the cluster for us.
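As an illustration of the Traefik piece, here is a hedged sketch of exposing one of the services through a standard Ingress that Traefik picks up (names are placeholders; newer Traefik versions prefer spec.ingressClassName or their own IngressRoute CRD):
$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  annotations:
    kubernetes.io/ingress.class: traefik   # let Traefik handle this Ingress
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service          # one of the microservices' Services
                port:
                  number: 8080
EOF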