This is a microservices deployment question. How would you deploy Envoy SDS (the Service Discovery Service) so that other Envoy proxies can find the SDS server hosts and, through them, discover the other services that make up the mesh? Should I put it behind a load balancer with a DNS name (a single point of failure), or run SDS locally on every machine so other microservices can reach it? Or is there a better deployment approach, where the SDS cluster can be added to the Envoy config dynamically without a single point of failure?
Putting it behind a DNS name with a load balancer across multiple SDS servers is a good setup for reasonable availability. If SDS is down, Envoy will simply not get updated, so it's generally not the most critical failure -- new hosts and services simply won't get added to the cluster/endpoint model in Envoy.
If you want higher availability, you can set up multiple SDS clusters. If you add multiple entries to your bootstrap config, Envoy will fail over between them; you can specify either multiple DNS names or multiple IPs.
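To make the failover setup concrete, here is a minimal sketch of such a bootstrap cluster, written as a Python dict you could dump to YAML; the cluster name, hostnames, and port are placeholders, and the exact fields depend on your Envoy version (and on whether you talk to the management server over REST or gRPC).

    import yaml  # pip install pyyaml

    # Hypothetical management/SDS cluster with several hosts. Envoy resolves all
    # of them and fails over to another endpoint if one becomes unreachable.
    sds_cluster = {
        "name": "sds_cluster",
        "type": "STRICT_DNS",            # or a static list of IPs with type "STATIC"
        "connect_timeout": "1s",
        "load_assignment": {
            "cluster_name": "sds_cluster",
            "endpoints": [{
                "lb_endpoints": [
                    {"endpoint": {"address": {"socket_address": {
                        "address": "sds-1.internal.example.com", "port_value": 8080}}}},
                    {"endpoint": {"address": {"socket_address": {
                        "address": "sds-2.internal.example.com", "port_value": 8080}}}},
                ],
            }],
        },
    }

    print(yaml.safe_dump({"static_resources": {"clusters": [sds_cluster]}}, sort_keys=False))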
(My original answer, written after misunderstanding the question, is kept below for posterity.)
You can start with a static config or DNS, but you'll probably want a full integration with your service discovery. Check out Service Discovery Integration on LearnEnvoy.io.
We are moving a workflow of our business to Azure. I currently have two VMs as an HA pair behind an internal load balancer in the North Central US Region as my production environment. I have mirrored this architecture in the South Central US Region for disaster recovery purposes. A vendor recommended I place an Azure Traffic Manager in front of the ILBs for automatic failover, but it appears that I cannot specify ILBs as endpoints for ATM. (For clarity, all connections to these ILBs are through VPNs.)
Our current plan is to put the IPs for both ILBs in a custom-built appliance placed on-prem, and the failover would happen on that appliance. However, it would greatly simplify things if we could present a single IP to that appliance, and let the failover happen in Azure instead.
Is there an Azure product or service, or perhaps a more appropriate architecture, that would allow a single IP to be presented to the customer while still allowing automatic failover across regions?
It seems that you could configure an Application Gateway with an internal load balancer (ILB) endpoint. In this case, the Application Gateway has a private frontend IP configuration. The APPGW is deployed in a dedicated subnet on the same VNet as your internal backend VMs. Note that in this setup you can add the private VMs directly as backends rather than using the internal load balancer's frontend IP address, because a private APPGW is itself an internal load balancer.
Moreover, the APPGW can also be given a public frontend IP configuration; if so, you can configure that public frontend IP as an endpoint of Azure Traffic Manager.
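For illustration only, here is a rough Python sketch (assuming a recent, track 2 version of the azure-mgmt-trafficmanager SDK) of registering that public frontend IP as an external Traffic Manager endpoint; the subscription ID, resource names, IP, and region below are placeholders, and the exact parameter shape may differ between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.trafficmanager import TrafficManagerManagementClient

    client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Add the Application Gateway's public frontend IP as an external endpoint of an
    # existing Traffic Manager profile. All names and values here are placeholders.
    client.endpoints.create_or_update(
        "my-resource-group",      # resource group of the Traffic Manager profile
        "my-tm-profile",          # Traffic Manager profile name
        "ExternalEndpoints",      # endpoint type for an IP/DNS target
        "appgw-northcentral",     # endpoint name
        {
            "target": "52.0.2.10",              # APPGW public frontend IP
            "endpoint_status": "Enabled",
            "endpoint_location": "northcentralus",
            "priority": 1,                      # lower number wins with 'Priority' routing
        },
    )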
Hope this helps.
I am migrating my Spring Cloud Eureka application to AWS ECS and am currently having some trouble doing so.
I have an ECS cluster on AWS in which two EC2 services were created:
Eureka-server
Eureka-client
Each service has a task running on it.
QUESTION:
How do I establish a "docker network" between these two services so that I can register my eureka-client with the eureka-server's registry? Having them in the same cluster doesn't seem to do the trick.
Locally I am able to establish a "docker network" to achieve this. Is it possible to have a "docker network" on AWS?
The problem here lies in the way ECS clusters work. If you go to your dashboard and check your task definition, you'll see an IP address that AWS assigns to the resource automatically.
In Eureka's case, you need to somehow obtain this IP address while deploying your Eureka client apps and use it to register with your eureka-server. But of course your tasks get destroyed and recreated, so you easily lose that address.
I've done this before, and there are a couple of ways to achieve it. Here is one of them:
For the EC2 instances on which you intend to place the ECS tasks acting as eureka-server (the registry), assign Elastic IP addresses so you always know which host IP address to connect to.
You also need to tag them properly so you can refer to them in the next step.
Then, switching back to ECS, when deploying your eureka-server tasks, the task definition configuration has an argument called placement_constraint.
This will allow you to add a constraint to your tasks so that they are placed on the instances to which you assigned Elastic IP addresses in the previous steps.
Now, if this is all in place and you have deployed everything, you should be able to point your eureka-client apps at that IP and have them register.
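To make those steps concrete, here is a hedged boto3 sketch; every ID, ARN, image, and attribute name below is a placeholder, and the surrounding infrastructure (cluster, EIP allocation) is assumed to exist already.

    import boto3

    ec2 = boto3.client("ec2")
    ecs = boto3.client("ecs")

    # 1. Pin a known Elastic IP to the instance that should host eureka-server.
    ec2.associate_address(InstanceId="i-0123456789abcdef0", AllocationId="eipalloc-0abc123")

    # 2. Tag the container instance with a custom ECS attribute so tasks can target it.
    ecs.put_attributes(
        cluster="my-cluster",
        attributes=[{
            "name": "role",
            "value": "eureka-server",
            "targetType": "container-instance",
            "targetId": "arn:aws:ecs:us-east-1:123456789012:container-instance/my-cluster/abc123",
        }],
    )

    # 3. Constrain the eureka-server task definition to instances carrying that attribute.
    ecs.register_task_definition(
        family="eureka-server",
        containerDefinitions=[{
            "name": "eureka-server",
            "image": "my-registry/eureka-server:latest",
            "memory": 512,
            "portMappings": [{"containerPort": 8761, "hostPort": 8761}],
        }],
        placementConstraints=[{"type": "memberOf", "expression": "attribute:role == eureka-server"}],
    )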
I know this looks dirty and somewhat complicated, but the thing is that the Netflix OSS Eureka project has missing parts, which I believe belong to their proprietary, internal implementation that they don't want to share.
Another, and probably cleaner, way of doing this is to use a Route 53 domain or alias record for your instances, so that instead of using an Elastic IP you can refer to them via DNS.
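For example, here is a hedged boto3 sketch of upserting such a record, assuming the zone is hosted in Route 53 (the zone ID, record name, and IP are placeholders); Eureka clients would then point their defaultZone at this stable DNS name instead of a raw IP.

    import boto3

    route53 = boto3.client("route53")

    # Give the registry a stable name that survives instance replacement.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEF",
        ChangeBatch={
            "Comment": "Stable name for the Eureka registry",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "eureka.internal.example.com",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],   # the Elastic IP
                },
            }],
        },
    )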
I'm interested in knowing if I can use Consul to solve the following issues:
1) Can Consul be used to load balance microservices? For instance, if I put Consul on the server that hosts my API gateway, can it be used to monitor all the microservices it has discovered and load balance if I have two instances of the same microservice?
2) Can Consul be used at the microservice level to spin up instances as needed? Essentially, I'd like to not use IIS and find an alternative.
3) If for whatever reason Consul sees a microservice as offline, can it attempt to start it up again? Or force a microservice to shut down for whatever reason?
If Consul can't solve these issues, are there other alternatives?
Thank you.
Consul DNS can provide a simple way for you to load balance services. It's especially powerful if you combine it with Consul Prepared Queries and health checks.
Consul is best suited for monitoring services (via health checks), but you can use consul watch to trigger events if a service suddenly becomes unavailable.
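As a small illustration of the discovery side, here is a hedged Python sketch that asks the local Consul agent's HTTP API for healthy instances of a service and picks one on the client side; the service name and agent address are placeholders.

    import random
    import requests

    # Ask the local Consul agent for instances of "my-api" whose health checks pass,
    # then pick one at random -- a very simple form of client-side load balancing.
    resp = requests.get(
        "http://127.0.0.1:8500/v1/health/service/my-api",
        params={"passing": "true"},
        timeout=2,
    )
    resp.raise_for_status()

    instances = [
        (entry["Service"]["Address"] or entry["Node"]["Address"], entry["Service"]["Port"])
        for entry in resp.json()
    ]
    address, port = random.choice(instances)
    print(f"routing request to {address}:{port}")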
Hashicorp (the company behind Consul) offers another tool called Nomad.
Unlike Consul, Nomad is designed to run services (called jobs) and restart them if necessary.
Nomad works best if you tell it where to find Consul. This enables automatic service registration for any task Nomad launches, including deregistering it if you instruct Nomad to stop running that task. Health checks are supported as well.
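As a small, hedged illustration of pointing Nomad at Consul: Nomad agents accept JSON config files as well as HCL, so a minimal consul stanza could be generated like this (the address and flags are assumptions about a typical single-host setup).

    import json

    # Minimal Nomad agent config stanza telling it where the local Consul agent is.
    # With this in place, Nomad registers the services of the jobs it runs in Consul
    # and removes them again when the jobs are stopped.
    nomad_consul_stanza = {
        "consul": {
            "address": "127.0.0.1:8500",
            "auto_advertise": True,     # advertise the Nomad agent itself in Consul
            "server_auto_join": True,   # Nomad servers discover each other via Consul
            "client_auto_join": True,   # Nomad clients discover servers via Consul
        }
    }

    with open("consul.json", "w") as f:
        json.dump(nomad_consul_stanza, f, indent=2)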
Say I deploy an API, the database, etc. to a t2.micro EC2 instance to serve traffic during prototyping and beta testing. Let's say the domain pointing to the API is api.exampleapp.com.
Now traffic begins to grow beyond the instance's limits, and we deploy the API to a bunch of instances that we want to put behind a load balancer. After setting up the fleet, how do we make api.exampleapp.com point to the load balancer so that traffic is served by the newly launched instances without any downtime? Is this possible at all? Or with minimal downtime? Or is this approach of starting up with a new API itself faulty?
I assume you either don't need auto-scaling or already have it configured.
Start the LB and attach your first EC2 instance to it. The instance still works and can still be reached directly via its IP (and is thus accessible from the outside world).
Check the LB hostname and try to access the instance through the LB to make sure it works.
Switch DNS to the LB using either a CNAME or an ALIAS record type (if ALIAS is supported by your DNS provider); a sketch of this step, assuming Route 53, follows after the list.
Add the other instances to the LB.
Done!
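If the load balancer is an ALB/NLB with a target group and the DNS zone happens to live in Route 53, a hedged boto3 sketch of attaching an instance and switching the record might look like this; all ARNs, IDs, zone IDs, and hostnames are placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")
    route53 = boto3.client("route53")

    # Attach the existing instance to the load balancer's target group.
    elbv2.register_targets(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/abc123",
        Targets=[{"Id": "i-0123456789abcdef0"}],
    )

    # Point api.exampleapp.com at the load balancer with an ALIAS record. Note that the
    # HostedZoneId inside AliasTarget is the load balancer's own zone ID, not the domain's.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEF",   # the exampleapp.com hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.exampleapp.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",   # the LB's zone ID (placeholder)
                        "DNSName": "my-api-lb-1234567890.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }],
        },
    )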
I was looking at Mesos + Marathon to manage Docker containers.
What we're trying to achieve is a way of getting an external DNS entry (test.example.com) to point to a specific set of Docker containers.
The DNS entry for test.example.com points to a load balancer, which translates and sends the connection to one of our backend app servers.
To do this I looked into Mesos-DNS. With Mesos-DNS I can get a DNS name for each container and resolve it to the container's IP, but I couldn't find a way to load balance between a set of servers.
Can someone confirm whether Mesos-DNS provides load balancing? If yes, how can I achieve load balancing with it?
Do I need to use some other solution like HAProxy or Bamboo to achieve this?
Thanks!!
Sumit
Yes, with Mesos-DNS you can do load balancing (see, for example, the respective HTTP API endpoints), but it's really not recommended in the context of DC/OS: see the internal (Minuteman) and external (Marathon-lb, HAProxy-based) load balancing and service discovery options in the docs.
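If you do resolve backends yourself, here is a hedged Python sketch (using dnspython) of looking up the SRV record Mesos-DNS publishes for a Marathon app and picking one backend on the client side; the record name is a placeholder.

    import random
    import dns.resolver  # pip install dnspython

    # Mesos-DNS publishes SRV records of the form _<service>._<protocol>.<framework>.mesos.
    answers = dns.resolver.resolve("_myapp._tcp.marathon.mesos", "SRV")

    backends = [(str(record.target).rstrip("."), record.port) for record in answers]
    host, port = random.choice(backends)
    print(f"sending request to {host}:{port}")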