Given two or more Finagle clients that have different destination names, if those names happen to resolve to the same inet address, how do I get Finagle to maintain only a single pool of connections to that endpoint?
Overly Simple Sample Code
The code below registers a very simple (and mostly useless) Resolver that always resolves to the same address. In practice the mapping would be many-to-one rather than all-to-one. There's also a simple application that starts a server and two clients, both of which use the resolver to find the address of the server.
// Registered with the Finagle Resolver via META-INF/services
class SmartResolver extends AbstractResolver {
    @Override public String scheme() { return "smart"; }

    @Override public Var<Addr> bind(String arg) {
        // assume this is more complicated and maps many names to one address
        return Vars.newConstVar(Addrs.newBoundAddr(Addresses.newInetAddress(
                new InetSocketAddress("127.0.0.1", 9000))));
    }
}
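For reference, the registration mentioned in the comment above is a standard ServiceLoader-style provider file; a minimal sketch, assuming Finagle's usual LoadService convention and a hypothetical com.example package for SmartResolver:

# src/main/resources/META-INF/services/com.twitter.finagle.Resolver
com.example.SmartResolver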
class Main {
    public static void main(String[] args) throws Exception {
        // Create a server; we record stats so we can see how many connections there are.
        InMemoryStatsReceiver stats = new InMemoryStatsReceiver();
        ListeningServer server = Thrift.server()
                .withStatsReceiver(stats)
                .serveIface(":9000", (EchoService.ServiceIface) Future::value);

        // Create a few clients that all connect to a resolved server; the
        // resolver ensures that they are all communicating with our server.
        EchoService.ServiceIface c1 = Thrift.client()
                .newIface("smart!c1", EchoService.ServiceIface.class);
        EchoService.ServiceIface c2 = Thrift.client()
                .newIface("smart!c2", EchoService.ServiceIface.class);

        // Make sure any lazy connections have been opened.
        Await.result(c1.echo("c1"));
        Await.result(c2.echo("c2"));

        // I'm not sure how to see how many physical connections there are
        // incoming to the server, or if it's possible to do this.
        assertEquals(1, stats.counter(JavaConversions.asScalaBuffer(Arrays.asList("connects"))));

        Await.result(server.close());
    }
}
// echo.thrift
service EchoService {
string echo(1: string quack);
}
All the details
Where I work we have a microservice architecture using Finagle and Thrift, and we use Consul for service discovery. Some of the external systems we interact with are very restrictive about the number and frequency of TCP connections they accept, so specific service instances are 'assigned responsibility' for those connections. To make sure that requests which need a specific connection are sent to the correct service instance, that instance registers a new service name in Consul representing the connection it is responsible for. Clients then look up the service by the connection name instead of the service name.
To make that clearer: say you have a device-service that opens TCP connections to a configured list of devices, and the devices only support a single connection at a time. You might have a few instances of this device-service with some scheme in place to divide the devices between them. So instance A connects to devices foo and bar, and instance B connects to baz.
You now have another service, say poke-service, that needs to talk to specific devices (via the device-service). The way I've solved this is to have device-service instance A register foo and bar, and instance B register baz, each against its own local address. Looking up foo then resolves to the address of device-service instance A instead of the more generic cluster of all device-service instances.
The problem I'd like to solve is an optimisation: I'd like Finagle to recognise that foo and bar actually resolve to the same address and to reuse all the resources it allocates and maintains as part of the connection. Additionally, if foo and bar get re-assigned to different device-service instances, I'd like everything to keep working based on the information in Consul, which would reflect that change.
Related
We have a UserCreated event that gets published from UserManagement.Api. I have two other APIs, Payments.Api and Notification.Api, that should react to that event.
In both APIs I have a public class UserCreatedConsumer : IConsumer<UserCreated> (in different namespaces), but only one queue (on SQS) gets created for both consumers.
What is the best way to deal with this situation?
You didn't share your configuration, but if you're using:
x.AddConsumer<UserCreatedConsumer>();
As part of your MassTransit configuration, you can specify an InstanceId for that consumer to generate a unique endpoint address.
x.AddConsumer<UserCreatedConsumer>()
    .Endpoint(e => e.InstanceId = "unique-value");
Every separate service (not an instance of the same service) needs a different queue name for its receiving endpoint, as described in the docs:
cfg.ReceiveEndpoint("queue-name-per-service-type", e =>
{
    // rest of the configuration
});
It's also mentioned in the common mistakes article.
I'm trying to understand the correct configuration and usage pattern of LoadbalanceRSocketClient in the context of a Spring Boot application (RSocketRequester).
I have two RSocket server backends (Spring Boot, RSocket messaging) running, and I configure the RSocketRequester on the client side like this:
List<LoadbalanceTarget> servers = new ArrayList<>();
for (String url : backendUrls) {
    HttpClient httpClient = HttpClient.create()
            .baseUrl(url)
            .secure(ssl -> ssl.sslContext(
                    SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE)));
    servers.add(LoadbalanceTarget.from(url, WebsocketClientTransport.create(httpClient, url)));
}

// RSocketRequester.Builder is autowired by Spring Boot
RSocketRequester requester = builder
        .setupRoute("/connect")
        .setupData("test")
        //.rsocketConnector(connector -> connector.reconnect(Retry.fixedDelay(60, Duration.ofSeconds(1))))
        .transports(Flux.just(servers), new RoundRobinLoadbalanceStrategy());
Once configured, the requester is used repeatedly from a timer loop, as follows:
@Scheduled(fixedDelay = 10000, initialDelay = 1000)
public void timer() {
    requester.route("/foo").data(Data).send().block();
}
It works: the client starts, connects to one of the servers, and pushes messages to it. If I kill the server the client is connected to, the client reconnects to the other server on the next timer event. If I then restart the first server and kill the second one, though, the client doesn't connect anymore, and the following exception is observed on the client side:
java.util.concurrent.CancellationException: Pool is exhausted
at io.rsocket.loadbalance.RSocketPool.select(RSocketPool.java:202) ~[rsocket-core-1.1.0.jar:na]
at io.rsocket.loadbalance.LoadbalanceRSocketClient.lambda$fireAndForget$0(LoadbalanceRSocketClient.java:49) ~[rsocket-core-1.1.0.jar:na]
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onNext(FluxContextWrite.java:107) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:220) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.MonoZip$ZipCoordinator.signal(MonoZip.java:251) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.MonoZip$ZipInner.onNext(MonoZip.java:336) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1784) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.MonoCallable.subscribe(MonoCallable.java:61) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.Mono.subscribe(Mono.java:3987) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.MonoZip.subscribe(MonoZip.java:128) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.Mono.subscribe(Mono.java:3987) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.Mono.block(Mono.java:1678) ~[reactor-core-3.4.0.jar:3.4.0]
I suspect I'm either not configuring the requester correctly or not using it properly. I'd appreciate any hints, as documentation and tests seem to be pretty thin in this area.
Ideally I'd like the client to transparently switch to the next available server upon server/connectivity failure. Right now the reconnection attempt seems to happen only on the next call to the timer() method, which isn't ideal, since the client needs to handle incoming messages from the server. Another thing I observed is that even though "/foo" is a fire-and-forget (FnF) route, the server never receives the call unless I call block() after send().
Update Endpoints List Continuously
LoadbalanceClient is designed to be integrated with a Discovery service, which is responsible for keeping a list of alive instances. If one of the services disappears from the cluster, the Discovery service updates its list of available instances.
On the other hand, to implement client-side load balancing, we have to know the list of available services in the cluster. To set up load balancing, we can retrieve that list and supply it to the Loadbalancer API:
ReactiveDiscoveryClient discoveryClient = ...

Mono<List<LoadbalanceTarget>> serversMono = discoveryClient
        .getInstances(serviceGroupName)
        .map(si -> {
            HttpClient httpClient = HttpClient.create()
                    .baseUrl(si.getUri().toString())
                    .secure(ssl -> ssl.sslContext(
                            SslContextBuilder.forClient()
                                    .trustManager(InsecureTrustManagerFactory.INSTANCE)));
            return LoadbalanceTarget.from(si.getUri().toString(),
                    WebsocketClientTransport.create(httpClient, "/rsocket"));
        })
        .collectList();

// RSocketRequester.Builder is autowired by Spring Boot
RSocketRequester requester = builder
        .setupRoute("/connect")
        .setupData("test")
        .transports(serversMono.flux(), new RoundRobinLoadbalanceStrategy());
However, imagine we are in a fully distributed environment where every service that disappears and reappears runs on a completely new host and port (e.g. a Kubernetes cluster, which does not stick to particular IP addresses). Load balancing has to account for this scenario, so to avoid dead nodes in the pool it removes unhealthy nodes completely.
Now, if all the nodes disappear and reappear after some time, they are no longer included in the pool (and if the Flux that provides updates has completed, the pool is effectively exhausted, because no new update will ever come in from the Flux<List<LoadbalanceTarget>>).
However, the nodes register themselves with the Discovery service and become available for observation again. Therefore we have to periodically pull info from the Discovery service to stay up to date and update the pool state continuously:
ReactiveDiscoveryClient discoveryClient = ...

Flux<List<LoadbalanceTarget>> serversFlux = discoveryClient
        .getInstances(serviceGroupName)
        .map(si -> {
            HttpClient httpClient = HttpClient.create()
                    .baseUrl(si.getUri().toString())
                    .secure(ssl -> ssl.sslContext(
                            SslContextBuilder.forClient()
                                    .trustManager(InsecureTrustManagerFactory.INSTANCE)));
            return LoadbalanceTarget.from(si.getUri().toString(),
                    WebsocketClientTransport.create(httpClient, "/rsocket"));
        })
        .collectList()
        .repeatWhen(f -> f.delayElements(Duration.ofSeconds(1))); // <- continuously retrieve a new List of ServiceInstances

// RSocketRequester.Builder is autowired by Spring Boot
RSocketRequester requester = builder
        .setupRoute("/connect")
        .setupData("test")
        .transports(serversFlux, new RoundRobinLoadbalanceStrategy());
With such a setup, the RSocketPool will not be exhausted if all the nodes disappear from the cluster, because the Flux<List<LoadbalanceTarget>> has not completed and may eventually provide new updates.
Note that the implementation is smart enough to keep active nodes across updates from the discovery service: if a given service instance is already in the pool, you will not get two connections to it at the same time.
Side note on reconnect feature
You may notice that RSocketConnector provides a great feature called .reconnect. At first glance, it may seem that using reconnect will keep your connection up and running indefinitely. Unfortunately, that is not true. The .reconnect feature is designed to keep your Mono<RSocket> reusable with cache semantics, which means you may create a @Bean Mono<RSocket>, autowire it in various places, and subscribe multiple times without worrying that the resulting RSocket instance will be different on every Mono<RSocket>.subscribe. On the other hand, if the given RSocket becomes disconnected (e.g. a lost-connection case), the next subscription to such a Mono<RSocket> will re-establish a new RSocket, only once for all concurrent .subscribe calls.
Though it sounds like a useful feature, in RSocketPool we do not rely on it much and use the Mono<RSocket> only once, to resolve and cache an instance of RSocket inside the RSocketPool. If that RSocket becomes disconnected, we will not try to subscribe to the given Mono<RSocket> again (we assume the host and port will have changed).
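For illustration, here is a minimal sketch of the cache semantics described above; the TCP transport, host, and port are placeholder assumptions, not part of the original setup:

import io.rsocket.RSocket;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.TcpClientTransport;
import reactor.core.publisher.Mono;
import reactor.util.retry.Retry;

import java.time.Duration;

public class ReconnectDemo {
    public static void main(String[] args) {
        // reconnect(...) makes this Mono<RSocket> cache its result and
        // re-establish it (once for all concurrent subscribers) after a disconnect.
        Mono<RSocket> rsocketMono = RSocketConnector.create()
                .reconnect(Retry.fixedDelay(60, Duration.ofSeconds(1)))
                .connect(TcpClientTransport.create("localhost", 7000));

        // Both subscriptions resolve to the same cached RSocket instance.
        RSocket first = rsocketMono.block();
        RSocket second = rsocketMono.block();
        System.out.println(first == second); // true while the connection is healthy
    }
}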
Regarding the question about FnF: this is part of the Rx model. Without a subscribe, the event doesn't happen. You are free to call an API returning a Mono without side effects before the subscribe; any other behaviour is a bug.
/**
 * Perform a Fire-and-Forget interaction via {@link RSocket#fireAndForget(Payload)}. Allows
 * multiple subscriptions and performs a request per subscriber.
 */
Mono<Void> fireAndForget(Mono<Payload> payloadMono);
If you call this method once and then subscribe three times to the result, it will execute three times.
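A minimal sketch of that behaviour, assuming an RSocketClient instance is available (the payload contents here are made up):

import io.rsocket.core.RSocketClient;
import io.rsocket.util.DefaultPayload;
import reactor.core.publisher.Mono;

class FnfDemo {
    static void demo(RSocketClient client) {
        // Nothing is sent when the Mono is created -- it is lazy.
        Mono<Void> fnf = client.fireAndForget(Mono.just(DefaultPayload.create("poke")));
        fnf.subscribe(); // request #1
        fnf.subscribe(); // request #2
        fnf.subscribe(); // request #3: one request per subscriber
    }
}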
Oleh, I tried what you suggested and it works to some extent, although I still can't quite get the behavior I need.
What I want to do is:
Client connects to a single (random) backend at a time
If backend or connectivity to the backend fails, client should try to connect to the next available backend.
I guess I can't use RoundRobinLoadbalanceStrategy, as it connects the client to all available backends. Should I use WeightedLoadbalanceStrategy instead? Or should the discoveryClient abstraction only return a single server every time? But then it would no longer be a 'pool' client, right?
Perhaps I should re-think my approach in general. I have a few tens of thousands of clients, so I want to balance the load on the backend: spread it across multiple backend instances, with each client randomly connecting to one instance but capable of re-connecting to another instance if the one it is connected to fails. I assume it's not a good idea to connect every client to every backend instance at the same time, but maybe I'm wrong?
I am using the fantastic StackExchange.Redis library to implement ObjectCache. One of the interface methods to implement in ObjectCache is long GetCount(...), which returns the number of keys in the database. It looks like this can be satisfied by the IServer.DatabaseSize(...) method in StackExchange.Redis.
I plan on fetching the server endpoints from ConnectionMultiplexer.GetEndPoints(), getting an IServer for each endpoint, and then querying the database size for each database I am interested in on each server (ignoring size discrepancies for the moment).
Now, ConnectionMultiplexer.GetEndPoints() has an optional parameter called "configuredOnly". What are the consequences of omitting it, versus passing true, versus passing false?
In the ConnectionMultiplexer.GetEndPoints() implementation, I see that it returns the EndPoints from the multiplexer configuration if configuredOnly is true, or else returns EndPoints from an array called "serverSnapshot".
As best I can tell, "serverSnapshot" is populated as servers are connected, or at least as connections to them are attempted.
Does GetEndPoints(true) return all EndPoints that were configured on the ConnectionMultiplexer? Do GetEndPoints() and GetEndPoints(false) return EndPoints that are actually connected/valid? The documentation for the GetEndPoints method with respect to the configuredOnly parameter is sparse, and my subsequent use of the returned EndPoints needs one behavior and not the other.
When configuredOnly is true, GetEndPoints() returns only the endpoints for the Redis servers explicitly specified in the call to ConnectionMultiplexer.Connect(). When configuredOnly is false, endpoints are returned for every Redis server in the cluster, whether or not they were specified in the initial ConnectionMultiplexer.Connect() call.
Somewhat strangely, if you use DNS names in the ConnectionMultiplexer.Connect() call, GetEndPoints(false) will return entries for both the DNS names and the resolved IP addresses. For example, with a six-node Redis cluster the following code:
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost:6379,localhost:6380");
foreach (var endpoint in redis.GetEndPoints(false))
{
    Console.WriteLine(endpoint.ToString());
}
will output
127.0.0.1:6379
Unspecified/localhost:6379
Unspecified/localhost:6380
127.0.0.1:6380
127.0.0.1:6381
127.0.0.1:6382
127.0.0.1:6383
127.0.0.1:6384
If I had called redis.GetEndPoints(true), only Unspecified/localhost:6379 and Unspecified/localhost:6380 would be returned.
I have been using the Akka remoting feature. It has been working very well except for one issue: if I try to look up a remote actor by its hostname, the lookup fails, but if I do it by IP address it works fine. Is there any way to make it work uniformly for both hostname and IP address?
My application.conf is something like below:
akka {
  version = "2.0.2"
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    netty {
      ...
      use-passive-connections = off
      hostname = ""
      port = 8000
      ...
    }
    ...
  }
}
From another machine:
actorSystem.actorFor("akka://MyActorSystem@10.0.0.1:8000/user/MyActor")         // **Works**
actorSystem.actorFor("akka://MyActorSystem@hostname.abc.com:8000/user/MyActor") // **Fails**
I have seen this behavior as well. I expected the Akka actor system to pick up the system host name from java.net.InetAddress.getLocalHost.getHostName.
However, this does not always seem to work. On the upside, adding the hostname to your application.conf file should produce the correct result, allowing remote actors to be looked up via context.actorFor("akka://MyActorSystem@host.domain.com:8000/user/whatever").
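For example, in application.conf (host.domain.com stands in for whatever name clients will look up):

akka {
  remote {
    netty {
      hostname = "host.domain.com"
    }
  }
}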
Even when the InetAddress call above produces the desired host name string (easily checked in the Scala REPL, for example), Akka seems to prefer binding to the IP address. If you want Akka to bind to the alias automatically on system start, you might consider modifying your Config object (setting akka.remote.netty.hostname) before passing it to ActorSystem.apply.
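A minimal sketch of that approach using the Java API (ActorSystem.create is the Java counterpart of ActorSystem.apply; the system name MyActorSystem is carried over from the example above):

import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import java.net.InetAddress;

public class RemoteMain {
    public static void main(String[] args) throws Exception {
        // Override akka.remote.netty.hostname with the local host name before
        // the actor system starts, so Akka binds to the alias rather than the IP.
        String host = InetAddress.getLocalHost().getHostName();
        Config config = ConfigFactory
                .parseString("akka.remote.netty.hostname = \"" + host + "\"")
                .withFallback(ConfigFactory.load());
        ActorSystem system = ActorSystem.create("MyActorSystem", config);
    }
}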
Of course the other, perhaps less desirable, option is to simply set the value on each of your deployed nodes.
Hope that helps!
I have a Cocoa client and server application that communicate using distributed objects implemented via NSSocketPorts and NSConnections in the standard way. The server vends a single object to the client applications, of which there may be several. Each client application can gain access to the same distributed object getting its own proxy.
The vended object supports a particular protocol, which includes a method similar to the following:
@protocol VendedObjectProtocol
- (void) acquireServerResource:(ServerResourceID)rsc;
@end
When a client application invokes this method, the server is supposed to allocate the requested resource to that client application. But there could be several clients that request the same resource, and the server needs to track which clients have requested it.
What I'd like to be able to do on the server-side is determine the NSConnection used by the client's method invocation. How can I do that?
One way I have thought about is this (server-side):
- (void) acquireServerResource:(ServerResourceID)rsc withDummyObject:(id)dummy {
    NSConnection* conn = [dummy connectionForProxy];
    // Map the connection to a particular client
    // ...
}
However, I don't really want the client to have to pass a dummy object for no real purpose (from the client's perspective). I could make ServerResourceID a class so that it gets passed through as a proxy, but I don't really want to do that either.
It seems to me that if I were doing the communication with raw sockets, I would be able to figure out which socket the message came in on, and therefore work out which client sent it, without the client needing to send anything special as part of the message. I need a way to do this with a distributed-objects method invocation.
Can anyone suggest a mechanism for doing this?
What you are looking for are NSConnection's delegate methods. For example:
- (BOOL)connection:(NSConnection *)parentConnection shouldMakeNewConnection:(NSConnection *)newConnection {
    // set up and map newConnection to a particular client
    id<VendedObjectProtocol> obj = /* ... */;
    obj.connection = newConnection;
    return YES;
}
You could create a separate vended object (conforming to VendedObjectProtocol) for each individual connection and then retrieve the connection with self.connection:
- (void) acquireServerResource:(ServerResourceID)rsc {
    NSConnection* conn = self.connection;
    // Map the connection to a particular client
    // ...
}
Or you can make use of conversation tokens, using +createConversationForConnection: and +currentConversation.