apollo graphql federation server, load balancer health checks fail

I have a project with 6 GraphQL subgraphs and one gateway (an Express server) that federates and connects to them. I have deployed this project on AWS: each of those servers runs as an ECS task with a load balancer attached to it. All of the GraphQL servers are up and running, but the gateway is failing its health checks.
I can see logs showing that the server has started, but after that the load balancer performs its health checks, the ECS task fails, and it starts over.
I don't know how to debug this, because the health check configuration (path, status, etc.) is correct and calling the health check path locally succeeds.
What am I missing, and how can I set everything up? Thank you in advance.

Related

When a Consul node runs in server mode, what will the /v1/agent/services endpoint return?

I find that my server node's /v1/agent/services endpoint returns the majority of services, but not all of them. Does anyone know why?
The visibility of services will depend on which API endpoint you're using.
Consul intends for services to be registered against a Consul client agent which is running on the same host as the deployed service (using the /v1/agent/service/register endpoint). The services registered with each agent in the data center are aggregated to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog).
The /v1/agent/services endpoint only returns services which have been registered against the specific agent with which you are communicating. In contrast, the /v1/catalog/services endpoint returns an aggregated list of all services which have been registered with every agent across the data center. If you query this endpoint, you will receive a list of all services registered with Consul.
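For illustration, assuming the default HTTP API address 127.0.0.1:8500, the two queries differ only in scope:
curl http://127.0.0.1:8500/v1/agent/services    # services registered with this specific agent only
curl http://127.0.0.1:8500/v1/catalog/services  # aggregated list of services across the whole data center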

Consul difference between agent and catalog

I don’t understand the difference between consul’s agent api and catalog api
Although the Consul documentation has always emphasized that the agent and catalog APIs should not be confused, there are indeed many endpoints that look similar, such as:
/catalog/services
/agent/services
When should I use the catalog API and when should I use the agent API (as in the HTTP paths above)?
Which one is suitable for high frequency calls?
Consul is designed for services to be registered against a Consul client agent which is running on the same host where a service is deployed. The /v1/agent/service/ endpoints provide a way for you to interact with services which are registered with the specific Consul agent with which you are communicating, and to register new services against that agent.
Each Consul agent in the data center submits its registered service information to the Consul servers. The servers aggregate this information to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog). The /v1/catalog/ endpoints return that aggregated information.
I want to call out this sentence from the anti-entropy doc.
Consul treats the state of the agent as authoritative; if there are any differences between the agent and catalog view, the agent-local view will always be used.
The catalog APIs can be used to register or remove services/nodes from the catalog, but normally these operations should be performed against the client agents (using the /v1/agent/ APIs) since they are authoritative for data in Consul.
The /v1/agent/ APIs should be used for high frequency calls, and should be issued against the local Consul client agent running on the same node as the app, as opposed to communicating directly with the servers.
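As a rough sketch of that pattern (the service name "web" and port 8080 are made-up examples, and the agent's default HTTP address 127.0.0.1:8500 is assumed), registration and high-frequency reads both go to the agent running on the same node as the app:
# Register a service with the local agent; the agent syncs it into the catalog
curl --request PUT --data '{"Name": "web", "Port": 8080}' http://127.0.0.1:8500/v1/agent/service/register
# High-frequency reads go to the same local agent
curl http://127.0.0.1:8500/v1/agent/services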

Consul Agent Service Registrations on other nodes are not fetchable from the REST API but are showing in the UI

We have a Consul cluster of 3 servers and register agent services on any of them via the REST API.
In the UI, registrations made on one server are visible on the other servers as well. For example, a registration on server A is visible in server B's UI (accessible at http://serverb:8500/).
However, when hitting server B via the REST API, it only shows its own registrations and does not show server A's registrations.
The servers are started as follows:
Server A
consul agent -server -ui -bootstrap-expect=1 -node=ServerA -data-dir=D:\data -bind=11.223.15.78 -client=0.0.0.0 -retry-join=11.223.15.79 -retry-join=11.223.15.80
Server B
consul agent -server -ui -bootstrap-expect=1 -node=ServerB -data-dir=D:\data -bind=11.223.15.79 -client=0.0.0.0 -retry-join=11.223.15.78 -retry-join=11.223.15.80
Server C
consul agent -server -ui -bootstrap-expect=1 -node=ServerC -data-dir=D:\data -bind=11.223.15.80 -client=0.0.0.0 -retry-join=11.223.15.78 -retry-join=11.223.15.79
Is this an issue or am I doing something wrong?
The visibility of services will depend on which API endpoint you're using, and where you're registering your services. Consul intends for services to be registered against a Consul client agent which is running on the same host as the deployed service. The services registered with each agent in the data center are aggregated to form the service catalog (https://www.consul.io/docs/architecture/anti-entropy#catalog).
The /catalog/services endpoint returns an aggregated list of services registered with each agent across the data center. The /agent/services endpoint will only return services registered against the specific local agent with which you are communicating.
If you want clients to be able to register services against any server, you'll want to register them using the /catalog/register endpoint. You can optionally use a tool like Consul External Service Monitor (consul-esm) to provide health checking for services, independently from the Consul servers. See https://www.hashicorp.com/blog/consul-and-external-services for more information.
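For example (the node name, address, and service details below are placeholders), an external service can be written straight into the catalog like this:
# Register an external service directly in the catalog (no local agent involved)
curl --request PUT --data '{"Node": "ext-node", "Address": "10.1.2.3", "Service": {"ID": "billing-1", "Service": "billing", "Port": 9090}}' http://127.0.0.1:8500/v1/catalog/register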
If a service has been registered via the agent API on only one Consul node of a cluster, you can still query the service by its service name by means of the catalog API from all server nodes:
/v1/catalog/service/:servicename
See https://www.consul.io/api-docs/catalog#list-nodes-for-service
Note that you need to deregister a service via the agent API on the same Consul node where you registered it via the agent API in the first place. If you just deregister it from the catalog, it will be back after a few minutes (that is at least my experience).
The Consul documentation recommends using the agent API for registration, so I would still stick to registering via the agent API, although it makes deregistering a bit tricky.
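Concretely (the service ID "my-service-1" is a placeholder), deregistration is a call to the agent API on the node that holds the registration:
# Must be sent to the same agent that originally registered the service
curl --request PUT http://127.0.0.1:8500/v1/agent/service/deregister/my-service-1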

How to deploy Envoy EDS/SDS

This is a microservices deployment question. How would you deploy the Envoy SDS (service discovery service) so that other Envoy proxies can find the SDS server hosts, in order to discover other services and build the service mesh? Should I put it behind a load balancer with a DNS name (a single point of failure), or just run the SDS locally on every machine so other microservices can access it? Or is there a better deployment approach where the SDS cluster can be added dynamically to the Envoy config without a single point of failure?
Putting it behind a DNS name with a load balancer across multiple SDS servers is a good setup for reasonable availability. If SDS is down, Envoy will simply not get updated, so it's generally not the most critical failure -- new hosts and services simply won't get added to the cluster/endpoint model in Envoy.
If you want higher availability, you can set up multiple clusters. If you add multiple entries to your bootstrap config, Envoy will fail over between them. You can specify either multiple DNS names or multiple IPs.
(My earlier answer, written after misunderstanding the question, kept below for posterity)
You can start with a static config or DNS, but you'll probably want to check out a full integration with your service discovery. See Service Discovery Integration on LearnEnvoy.io.

consul: how many agents for services

I am playing a little with Docker and Consul and I have a couple of questions regarding agent-service mapping, especially in a Docker environment. Assume I have a service named "myGreatService", a simple Node.js hello-world web application packaged in a Docker image named "myGreatServiceImage". From the Consul docs I understood that when you register a service (through the HTTP API or a service definition file), the service is "wired" to an agent/Consul node (the wired node can be retrieved via /v1/catalog/service/). So if a Consul node is down (or its health check decides it is down), then all services "wired" to that Consul node will automatically be marked as down. Am I right?
If I run the myGreatServiceImage image multiple times on a single host via Docker (resulting in multiple instances of the "myGreatService" service), how many agents should I run?
A single agent per host managing all containers (all service instances) on that host? Or maybe a separate agent for each container (service instance)?
If a health check for a service fails, then the service will be marked as down and won't show up if you do a DNS query for that service:
dig @localhost -p 8600 apache.service.consul
If you make a call to the API, you will see that the service is still listed. This is because the service is not removed; it is just marked as down. If you make an API call to check the health of that service, it will be shown as down.
curl localhost:8500/v1/catalog/service/apache
curl localhost:8500/v1/health/service/apache
You can add the ?passing flag to that last call to receive only the healthy services (just like the DNS query).
curl localhost:8500/v1/health/service/apache?passing
If the Consul agent on the host fails, then all services running on that host won't show up if you query Consul for the services (either via a DNS query or via the API).
As for the number of agents you should be running: run one Consul agent per host. Let your services register themselves via the API of your local Consul agent (or preconfigure all your services in the config files, but I recommend making this a dynamic process of self-registration).
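A minimal sketch of that setup, assuming the Consul servers are reachable at 10.0.0.2 and the service exposes a /health endpoint on port 3000 (the addresses, ports, and paths are all placeholders):
# One Consul client agent per host, joined to the servers
consul agent -data-dir=/tmp/consul -retry-join=10.0.0.2 -client=0.0.0.0
# Each service instance registers itself (and its health check) with the local agent
curl --request PUT --data '{"Name": "myGreatService", "ID": "myGreatService-1", "Port": 3000, "Check": {"HTTP": "http://localhost:3000/health", "Interval": "10s"}}' http://127.0.0.1:8500/v1/agent/service/register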
