Vitess: pass dc parameter to Consul

I would like to connect Vitess (for example, vtctld) to Consul (as the topology server), but I need to pass the dc (datacenter) parameter. I thought it should be done in consul_auth_static_file, but I guess that file is only used for authentication purposes.
So the question is: how can I do this? How do I pass the dc parameter to the topology server?
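For background, the datacenter is indeed not part of Consul authentication; in Consul's own HTTP API it travels as a dc query parameter on each request. The sketch below (Node 18+ global fetch) is not Vitess-specific, and the key path and datacenter name are made up; it only shows where that parameter lives, assuming a local agent on the default port:

```typescript
// Minimal sketch: Consul's HTTP API accepts the target datacenter as a
// `dc` query parameter on most endpoints. Key path and dc are hypothetical.
const CONSUL_ADDR = "http://127.0.0.1:8500"; // assumed local agent

async function readKeyFromDc(key: string, dc: string): Promise<string | null> {
  // `?dc=` tells Consul which datacenter to forward the query to.
  const res = await fetch(`${CONSUL_ADDR}/v1/kv/${key}?dc=${encodeURIComponent(dc)}`);
  if (res.status === 404) return null; // key not found in that dc
  const [entry] = await res.json();    // KV reads return an array of entries
  // Values are base64-encoded in the JSON response.
  return Buffer.from(entry.Value, "base64").toString("utf8");
}

readKeyFromDc("vitess/global/some-key", "dc2").then(console.log);
```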

Related

How to pass traceId generated in a service to another with Nestjs TCP call

I have a service (A) that acts as the entry point and receives HTTP requests. It is able to retrieve/create a traceID for each call, and it calls another service (B) via TCP (message pattern).
For logging purposes, how can this traceID be passed from A to B? I would rather not put it in the data alongside the cmd, as that does not look very neat. Is there any technique that could achieve this?
Resolved this by sending the traceId as an extra field in the data; service B then parses the traceId out.
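For illustration, here is a minimal sketch of that resolution using @nestjs/microservices over TCP. The { cmd: 'get_user' } pattern, the payload shape, and the class names are made up, and the ClientsModule/@Inject wiring for the ClientProxy is omitted for brevity:

```typescript
import { Controller, Injectable } from '@nestjs/common';
import { ClientProxy, MessagePattern, Payload } from '@nestjs/microservices';
import { firstValueFrom } from 'rxjs';

// Service A: attach the traceId as an extra field next to the real data.
@Injectable()
export class UserGateway {
  constructor(private readonly client: ClientProxy) {} // wiring omitted

  getUser(userId: number, traceId: string) {
    return firstValueFrom(
      this.client.send({ cmd: 'get_user' }, { traceId, userId }),
    );
  }
}

// Service B: parse the traceId out of the payload before handling the call.
@Controller()
export class UserController {
  @MessagePattern({ cmd: 'get_user' })
  getUser(@Payload() payload: { traceId: string; userId: number }) {
    console.log(`[trace ${payload.traceId}] looking up user ${payload.userId}`);
    return { id: payload.userId }; // stub response
  }
}
```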

How does Consul KV consistency work with regard to updating the same key

Given a Consul KV key a/key, where multiple server agent instances are running, what happens in the following case?
Two requests, A (set value to val-a) and B (set value to val-b), are made to the create-key endpoint without using the cas or acquire parameters, in order to update the same key a/key:
If A and B are made in parallel, can the key's value become corrupted?
Or, if A comes slightly before B, can the final value still become val-a?
The data will not be corrupted if Consul receives two write requests at the same time. The write requests would be processed serially by the leader, so the value of a/key would become either val-a or val-b, whichever is processed last.
You can find details on how Consul writes data in Consul's Consensus Protocol documentation.
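Since the question mentions cas, here is a minimal sketch of a check-and-set update against Consul's KV HTTP API (Node 18+ global fetch; a local agent on the default port is assumed). With ?cas=<ModifyIndex>, Consul applies the write only if the key is unchanged since it was read, so one of two racing writers loses explicitly instead of being silently overwritten:

```typescript
const CONSUL = "http://127.0.0.1:8500"; // assumed local agent

async function casUpdate(key: string, newValue: string): Promise<boolean> {
  // 1. Read the current entry to learn its ModifyIndex.
  const res = await fetch(`${CONSUL}/v1/kv/${key}`);
  const [entry] = await res.json(); // KV reads return an array of entries

  // 2. Write back with ?cas=<ModifyIndex>; Consul answers "true" or "false".
  const put = await fetch(`${CONSUL}/v1/kv/${key}?cas=${entry.ModifyIndex}`, {
    method: "PUT",
    body: newValue,
  });
  return (await put.text()).trim() === "true";
}

// If two callers race, at most one of these CAS writes succeeds.
casUpdate("a/key", "val-a").then((ok) => console.log("A won:", ok));
casUpdate("a/key", "val-b").then((ok) => console.log("B won:", ok));
```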

ELB Balancing Stateful Servers

Let's say I have an HTTP/2 service that keeps a list of users and each user's hair color, both in memory and in a database.
Now I want to scale this up to multiple nodes; however, I do not want the same user to be in two different servers' memory: each server should handle its own specific set of users. This means I need to inform the load balancer where each user is being handled. When scaling down, I need to signal that a user is no longer assigned anywhere and can be routed to any server, or by a given rule, e.g. to the server with the least memory in use.
Would anyone know if the ALB load balancer supports that? One path I was considering is query string parameter-based routing, so I could carry something like destination_node = (int)user_id % 4 in the request itself (in case I had 4 nodes, for instance). This worked well in a proof of concept, but it leads to a few issues (see the sketch after the answer below):
The service itself would need to know how many instances there are to balance across.
I could not guarantee even balancing; it is basically luck-based.
What would be the preferred approach here, or what is a common way of solving this problem? Does AWS ELB support this out of the box? I was trying to avoid writing my own balancer: a middleware that keeps track of which servers are handling which users and whose responsibility would be to distribute requests among those servers.
With the AWS Application Load Balancer (ALB) it is possible to write routing rules on:
Host Header
HTTP Header
HTTP Request Method
Path Pattern
Query String
Source IP
But at the moment there is no way to route on dynamic conditions.
If it is possible to group your data, I would prefer a path pattern like
/users/blond/123
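To make the trade-off from the question concrete, here is a sketch of the modulo scheme next to a consistent-hashing alternative (rendezvous/HRW hashing). The hostnames and URL are made up; this is the client-side computation whose result a static ALB query-string rule per node would then match on:

```typescript
// Sketch of the query-string approach from the question: compute a
// destination node from the user id and put it in the URL. Names are
// hypothetical.
const NODE_COUNT = 4; // the pain point: callers must know the cluster size

function routeUrl(userId: number): string {
  const destinationNode = userId % NODE_COUNT; // modulo assignment, as in the PoC
  return `https://api.example.com/users/${userId}?destination_node=${destinationNode}`;
}

// With plain modulo, changing NODE_COUNT remaps almost every user, so each
// scale event moves users between servers' memories. Rendezvous hashing
// moves only ~1/N of the users when a node is added or removed.
function rendezvousNode(userId: number, nodes: string[]): string {
  let best = nodes[0];
  let bestScore = -Infinity;
  for (const node of nodes) {
    // Cheap deterministic score per (user, node) pair; a real
    // implementation would use a proper hash function.
    let score = 0;
    for (const ch of `${node}:${userId}`) {
      score = (score * 31 + ch.charCodeAt(0)) % 1_000_000_007;
    }
    if (score > bestScore) {
      bestScore = score;
      best = node;
    }
  }
  return best; // highest-scoring node "wins" this user
}
```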

Hystrix command key decision: Service Name + Instance IP + API Name?

I want to implement Hystrix in a gateway (like Zuul).
The gateway will discover services A, B, and C; assume service A has 10 instances and 10 APIs. My question is:
What is the best practice for choosing the command key? Service Name + Instance IP + API Name?
It seems this gives the finest level of detail, since a failure in one API or one instance will not circuit-break the others, but it may produce a large volume of command keys.
Here is an example. Suppose I talk to service A through a load balancer, there are 5 instances of service A, and the IPs are as below:
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
and service A has 4 APIs:
createOrder
deleteOrder
updateOrder
getOrder
Now there are several options for choosing the command key:
service level, like serviceA
instance level, like 192.168.1.1
instance + API level, like 192.168.1.1_getOrder
For the first option there is only one Hystrix command; it takes less CPU and memory, but if one API fails, all APIs are circuit-broken.
Your HystrixCommandKey identifies a HystrixCommand, which encapsulates aService.anOperation(). Thus a HystrixCommandKey could be named using the composite key Service+Command (but not the instances running the service or their IP addresses). If you do not provide an explicit name, the class name of the HystrixCommand is used as the default HystrixCommandKey.
The Hystrix Dashboard then aggregates metrics for each HystrixCommandKey (Service+Command) from every instance running in the service cluster.
In your example, it would be serviceA_createOrder.
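Hystrix itself is a Java library, but the granularity point is language-agnostic. As an illustration only, the sketch below keys circuit breakers by Service+Command (never by instance IP) using the Node circuit-breaker library opossum; all service and command names are made up:

```typescript
import CircuitBreaker from 'opossum';

const breakers = new Map<string, CircuitBreaker>();

// One breaker per Service+Command key, e.g. "serviceA_createOrder",
// shared across all instances the load balancer might route to.
function breakerFor(
  service: string,
  command: string,
  action: (...args: any[]) => Promise<any>,
) {
  const key = `${service}_${command}`;
  if (!breakers.has(key)) {
    breakers.set(key, new CircuitBreaker(action, {
      timeout: 1000,                // fail calls slower than 1s
      errorThresholdPercentage: 50, // open after 50% of calls fail
      resetTimeout: 5000,           // probe again after 5s
    }));
  }
  return breakers.get(key)!;
}

// A failing createOrder opens only "serviceA_createOrder"; calls hitting
// any of the 5 instances share that breaker, and getOrder stays closed.
async function createOrder(order: object) {
  return breakerFor('serviceA', 'createOrder', callCreateOrder).fire(order);
}

async function callCreateOrder(order: object): Promise<object> {
  // Load-balanced HTTP/TCP call to one of service A's instances goes here.
  return { ok: true, order };
}
```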

Drop all but one node from Service Discovery

We use the Consul Service Discovery mechanism to fetch a list of proxies through which we scrape certain targets. There are multiple proxies for redundancy but ultimately they all provide the exact same information.
Now we'd like the relabeling to always drop all but one (random) node returned from SD. It must not be hardcoded, as the names and number of proxies can and will change.
After looking at the relabeling implementation I don't think this is possible, but maybe there is some clever hack to achieve this.
Question: Is it possible to drop all but one (random) node from Prometheus Service Discovery?
This is not possible. I'd suggest putting a load balancer of some form in front of the proxies.
