With Eureka, services can register themselves with the Eureka server directly. Why should we send requests to the Consul client instead of the Consul server? Are there any problems with letting services communicate with the Consul server directly?
Appreciate your help, thanks!
No, there is no problem with communicating directly with the servers.
Consul client agents are useful in larger datacenters with many Consul agents. The Consul developers recommend running three to five server agents per datacenter. If you need more agents (e.g., for hundreds of microservices), you should use client agents connected to the server agents instead of launching more server agents, which would decrease performance.
But in a smaller datacenter there is no problem using server agents directly.
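To make the recommended pattern concrete, here is a minimal Python sketch (using the requests library) of registering a service against the local client agent over Consul's HTTP API; the agent address, service name, port, and health check URL are assumptions made up for the example:

```python
# Minimal sketch: register a service with the *local* Consul client agent
# over the HTTP API. The agent address, service name, port, and health
# check URL below are assumptions made up for this example.
import requests

LOCAL_AGENT = "http://127.0.0.1:8500"  # default local agent HTTP address

registration = {
    "Name": "web",      # hypothetical service name
    "Port": 8080,       # hypothetical service port
    "Check": {          # optional health check the agent will run
        "HTTP": "http://127.0.0.1:8080/health",
        "Interval": "10s",
    },
}

# /v1/agent/service/register registers against the agent on this host;
# the agent then syncs the registration up to the server quorum.
resp = requests.put(f"{LOCAL_AGENT}/v1/agent/service/register", json=registration)
resp.raise_for_status()
```

In a small datacenter you could point the same request at a server agent instead, since server agents expose the same agent API.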
Related
I'm just getting started learning Nomad and Consul.
I have several servers that are not on a local area network; they are connected over the WAN (which I think is what the docs mean by datacenters), so every server is its own datacenter.
I found in the docs (https://www.consul.io/docs/architecture) that each datacenter should have 3 to 5 Consul servers, so does my setup work with Consul and Nomad?
Should I make all of the servers Consul servers, or make 3 of them Consul servers and the rest Consul clients?
You could use either:
3 Consul servers, or
2 Consul clients and one Consul server.
I have tested both of these setups; they work correctly.
I don't understand the difference between Consul's agent API and catalog API.
Although the Consul documentation has always emphasized that the agent and the catalog should not be confused, there are many endpoints that look similar, such as:
/catalog/services
/agent/services
When should I use the catalog API and when the agent API (as in the HTTP URLs above)?
Which one is suitable for high-frequency calls?
Consul is designed for services to be registered against a Consul client agent which is running on the same host where a service is deployed. The /v1/agent/service/ endpoints provide a way for you to interact with services which are registered with the specific Consul agent to which you are communicating, and register new services against that agent.
Each Consul agent in the data center submits its registered service information to the Consul servers. The servers aggregate this information to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog). The /v1/catalog/ endpoints return that aggregated information.
I want to call out this sentence from the anti-entropy doc.
Consul treats the state of the agent as authoritative; if there are any differences between the agent and catalog view, the agent-local view will always be used.
The catalog APIs can be used to register or remove services/nodes from the catalog, but normally these operations should be performed against the client agents (using the /v1/agent/ APIs) since they are authoritative for data in Consul.
The /v1/agent/ APIs should be used for high-frequency calls, and should be issued against the local Consul client agent running on the same node as the app, as opposed to communicating directly with the servers.
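To make the distinction concrete, here is a small Python sketch (using requests, and assuming a local agent on the default 127.0.0.1:8500) that calls both of the endpoints mentioned above:

```python
# Minimal sketch: the agent endpoint returns only the services registered
# with *this* agent, while the catalog endpoint returns the aggregated,
# datacenter-wide view held by the servers. Assumes a local agent on the
# default 127.0.0.1:8500.
import requests

LOCAL_AGENT = "http://127.0.0.1:8500"

# Services registered against this specific agent (local, authoritative, cheap)
local_services = requests.get(f"{LOCAL_AGENT}/v1/agent/services").json()

# All services known to the datacenter, as aggregated by the servers
catalog_services = requests.get(f"{LOCAL_AGENT}/v1/catalog/services").json()

print("agent view:  ", sorted(local_services))
print("catalog view:", sorted(catalog_services))
```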
This is a microservices deployment question. How would you deploy Envoy SDS (service discovery service) so that other Envoy proxies can find the SDS server hosts, in order to discover other services and build the service mesh? Should I put it behind a load balancer with a DNS name (a single point of failure), or just run the SDS locally on every machine so other microservices can access it? Or is there a better way to deploy it so that the SDS cluster can be dynamically added into the Envoy config without a single point of failure?
Putting it behind a DNS name with a load balancer across multiple SDS servers is a good setup for reasonable availability. If SDS is down, Envoy will simply not get updated, so it's generally not the most critical failure -- new hosts and services simply won't get added to the cluster/endpoint model in Envoy.
If you want higher availability, you can set up multiple clusters. If you add multiple entries to your bootstrap config, Envoy will fail over between them. You can either specify multiple DNS names or multiple IPs.
(My earlier answer, from before I understood the question, is kept below for posterity.)
You can start with a static config or DNS, but you'll probably want a full integration with your service discovery. Check out Service Discovery Integration on LearnEnvoy.io.
I'm interested in knowing if I can use Consul to solve the following issues:
1) Can Consul be used to load balance microservices? For instance, if I put Consul on the server that hosts my API gateway, can it be used to monitor all the microservices it has discovered and load balance between them if I have two instances of the same microservice?
2) Can Consul be used at the microservice level to spin up instances as needed? Essentially, I'd like to not use IIS and find an alternative.
3) If for whatever reason Consul monitors a microservice as offline, can it attempt to start it up again? Or force a shut down of a microservice for whatever reason?
If Consul can't solve these issues, are there other alternatives?
Thank you.
Consul DNS can provide a simple way for you to load balance services. It's especially powerful if you combine it with Consul Prepared Queries and health checks.
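As a rough sketch of what that combination can look like, the following Python snippet creates a prepared query over the HTTP API for a hypothetical "web" service (the query name, service name, and agent address are assumptions):

```python
# Minimal sketch: create a prepared query for a hypothetical "web" service
# that only returns passing (healthy) instances and fails over to up to two
# nearby datacenters. Assumes a local agent on the default 127.0.0.1:8500.
import requests

query = {
    "Name": "web-passing",
    "Service": {
        "Service": "web",
        "OnlyPassing": True,          # skip instances with failing health checks
        "Failover": {"NearestN": 2},  # fall back to the 2 nearest datacenters
    },
}

resp = requests.post("http://127.0.0.1:8500/v1/query", json=query)
resp.raise_for_status()
print("prepared query id:", resp.json()["ID"])
```

Once created, such a query can also be resolved through Consul DNS (e.g., web-passing.query.consul), so a plain DNS lookup already gives you simple load balancing across healthy instances.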
Consul is best suited for monitoring services (via health checks), but you can use consul watch to trigger events if a service suddenly becomes unavailable.
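If you would rather react to changes from application code instead of a consul watch handler, you can get a similar effect with Consul's blocking queries; here is a rough Python sketch (the service name and agent address are assumptions):

```python
# Minimal sketch: watch a hypothetical "web" service for changes using
# Consul's blocking queries (roughly what `consul watch` does under the
# hood). Assumes a local agent on the default 127.0.0.1:8500.
import requests

LOCAL_AGENT = "http://127.0.0.1:8500"
index = 0

while True:
    # Blocks (up to `wait`) until the service's health state changes past
    # the index we last saw, then returns the new list of instances.
    resp = requests.get(
        f"{LOCAL_AGENT}/v1/health/service/web",
        params={"index": index, "wait": "30s", "passing": "true"},
        timeout=40,
    )
    resp.raise_for_status()
    index = int(resp.headers["X-Consul-Index"])
    healthy = [
        (entry["Service"]["Address"] or entry["Node"]["Address"], entry["Service"]["Port"])
        for entry in resp.json()
    ]
    print("healthy instances:", healthy)
```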
Hashicorp (the company behind Consul) offers another tool called Nomad.
Unlike Consul, Nomad is designed to run services (called jobs) and restart them if necessary.
Nomad works best if you tell it where to find Consul. This enables automatic service registration for any task Nomad launches, including deregistering it if you instruct Nomad to stop running that task. Health checks are supported as well.
In my microservices system I plan to use Docker Swarm and Consul.
In order to ensure the high availability of Consul, I'm going to build a cluster of 3 server agents (along with a client agent per node), but this doesn't save me from a local Consul agent failure.
Am I missing something?
If not, how can I configure Swarm to be aware of more than one Consul agent?
Consul is the only service discovery backend that doesn't support multiple endpoints when used with Swarm.
Both ZooKeeper and etcd support the etcd://10.0.0.4,10.0.0.5 format for providing multiple IPs for the "cluster" of discovery backends when using Swarm.
To answer your question about how you can configure Swarm to support more than one Consul server: I don't have a definitive answer, but I can point you in a direction with something you can test (no guarantees).
One suggestion worth testing (which is not recommended for production) is to use a load balancer that passes requests from the Swarm manager to one of the three Consul servers.
So when starting the Swarm managers you can point them at consul://ip_of_loadbalancer:port
This will, however, make the load balancer a bottleneck and a single point of failure (if it goes down, discovery goes with it).
I have not tested the above and can't say whether it will work or not - it is merely a suggestion.