Why must "loadBalancingPolicy" be set when using "healthCheckConfig" in grpc-go?

The relevant code appears in both the client and the server. The service config in question:
var serviceConfig = `{
    "loadBalancingPolicy": "round_robin",
    "healthCheckConfig": {
        "serviceName": ""
    }
}`
Test steps:
1. Run only one server and one client.
2. With "loadBalancingPolicy": "round_robin", the client detects the server's status=NOT_SERVING.
3. With "loadBalancingPolicy": "round_robin" removed, or with "pick_first" instead, the client cannot detect the server's status=NOT_SERVING.

Health checking is meaningful when there are multiple server addresses; if there is only one address, there is no need to check its health. That is why the round_robin load-balancing policy works together with health checking.
round_robin tracks the health status of each address, so it sends requests only to READY addresses, one after another.
The pick_first policy does not support health checking; it uses the first successfully connected server, so every request goes to that single address.
You can read the documentation of health checking and load-balancing policies in the gRPC proposal "LB Policies Can Disable Health Checking When Needed".
To debug the client and the server, set the environment variables GRPC_GO_LOG_SEVERITY_LEVEL=info and GRPC_GO_LOG_VERBOSITY_LEVEL=99 to get more detail about transport and connection events.
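As a concrete illustration, here is a minimal sketch of a client that enables client-side health checking; the target address localhost:50051 and the insecure credentials are assumptions for illustration, not from the question. Note the blank import of google.golang.org/grpc/health, which registers the client-side health-check function:

package main

import (
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    _ "google.golang.org/grpc/health" // registers the client-side health-check function
)

const serviceConfig = `{
    "loadBalancingPolicy": "round_robin",
    "healthCheckConfig": { "serviceName": "" }
}`

func main() {
    conn, err := grpc.Dial("localhost:50051", // assumed address for illustration
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithDefaultServiceConfig(serviceConfig),
    )
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()
    // Create stubs on conn and issue RPCs; with round_robin, addresses whose
    // health status is not SERVING are taken out of the rotation.
}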

After reading the source code carefully, I understood the internal implementation.
pick_first
It implements balancer.Builder and balancer.Balancer by itself.
For ResolverState.Addresses it creates only one SubConn; the SubConn holds an addrConn, and the ClientTransport is created with the first address.
Every call to Pick() returns the same fixed balancer.PickResult.
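As a simplified sketch (not grpc-go's actual source), a pick_first-style picker holds a single SubConn and returns it on every call; this assumes an import of google.golang.org/grpc/balancer:

type pfPicker struct {
    sc balancer.SubConn // the one SubConn, connected to the first reachable address
}

func (p *pfPicker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
    return balancer.PickResult{SubConn: p.sc}, nil
}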
round_robin
It passes the parameter HealthCheck: true and returns a baseBuilder as the Builder through base.NewBalancerBuilder().
Each address in ResolverState.Addresses gets its own SubConn.
Each call to Pick() advances an internal next index into []balancer.SubConn and returns a new balancer.PickResult.
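The rotation itself reduces to a few lines; again a simplified sketch rather than the library's exact code, assuming imports of sync and google.golang.org/grpc/balancer:

type rrPicker struct {
    mu       sync.Mutex
    subConns []balancer.SubConn // only the SubConns whose health status is SERVING
    next     int
}

func (p *rrPicker) Pick(balancer.PickInfo) (balancer.PickResult, error) {
    p.mu.Lock()
    defer p.mu.Unlock()
    sc := p.subConns[p.next]
    p.next = (p.next + 1) % len(p.subConns)
    return balancer.PickResult{SubConn: sc}, nil
}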

Related

MassTransit EndpointConvention Azure Service Bus

I'm wondering if I'm doing something wrong; I expected MassTransit to automatically register ReceiveEndpoints in the EndpointConvention.
Sample code:
services.AddMassTransit(x =>
{
    x.AddServiceBusMessageScheduler();
    x.AddConsumersFromNamespaceContaining<MyNamespace.MyRequestConsumer>();
    x.UsingAzureServiceBus((context, cfg) =>
    {
        // Load the connection string from the configuration.
        cfg.Host(context.GetRequiredService<IConfiguration>().GetValue<string>("ServiceBus:ConnectionString"));
        cfg.UseServiceBusMessageScheduler();
        // Without this line I'm getting an error complaining that no endpoint convention for x could be found.
        EndpointConvention.Map<MyRequest>(new Uri("queue:queue-name"));
        cfg.ReceiveEndpoint("queue-name", e =>
        {
            e.MaxConcurrentCalls = 1;
            e.ConfigureConsumer<MyRequestConsumer>(context);
        });
        cfg.ConfigureEndpoints(context);
    });
});
I thought this line, EndpointConvention.Map<MyRequest>(new Uri("queue:queue-name"));, wouldn't be necessary to allow sending to the bus without specifying the queue name, or am I missing something?
await bus.Send<MyRequest>(new { ...});
The EndpointConvention is a convenience feature that allows the use of Send without specifying the endpoint address. There is nothing in MassTransit that will automatically configure this because, frankly, I don't use it. And I don't think anyone else should either. That stated, people do use it for whatever reason.
First, think about the ramifications: if every message type was registered as an endpoint convention, what about messages that are published and consumed on multiple endpoints? That wouldn't work.
So, if you want to route messages by message type, MassTransit has a feature for that. It's called Publish and it works great.
But wait, it's a command, and commands should be Sent.
That is true, however, if you are in control of the application and you know that there is only one consumer in your code base that consumes the KickTheTiresAndLightTheFires message contract, publish is as good as send and you don't need to know the address!
No, seriously dude, I want to use Send!
Okay, fine, here are the details. When using ConfigureEndpoints(), MassTransit uses the IEndpointNameFormatter to generate the receive endpoint queue names based upon the types registered via AddConsumer, AddSagaStateMachine, etc., and that same interface can be used to register your own endpoint conventions if you want to use Send without specifying a destination address.
You are, of course, coupling the knowledge of your consumer and message types, but that's your call. You're already dealing with magic (by using Send without an explicit destination), so why not, right?
string queueName = formatter.Consumer<T>();
Use that string for the message types in that consumer as a $"queue:{queueName}" address and register it on the EndpointConvention, as in the sketch below.
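A minimal sketch of that registration, assuming the default formatter and the MyRequest/MyRequestConsumer types from the question:

// Resolve the queue name the same way ConfigureEndpoints() does,
// then map the message type to that queue for Send.
IEndpointNameFormatter formatter = DefaultEndpointNameFormatter.Instance;
string queueName = formatter.Consumer<MyRequestConsumer>();
EndpointConvention.Map<MyRequest>(new Uri($"queue:{queueName}"));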
Or, you know, just use Publish.

Disable client extension if server does not accept extensions

I have a WebSocket client that uses the netty WebSocketClientCompressionHandler to support the compression extension. For this extension to work properly I need to set the allowExtensions value to true when creating a new handshaker via the WebSocketClientHandshakerFactory.
At times when the server does not support these extensions, it responds without a Sec-WebSocket-Extensions header. In that case, if reserved (RSV) bits are used, the client should terminate the connection immediately.
Since I am creating the WebSocketClientHandshaker before I get any response from the server, I am unable to set allowExtensions to false afterwards, when I learn that the server does not support extensions.
Is it in any way possible to set allowExtensions to false after I receive the response from the server (or otherwise inform netty) so that netty will close the connection if an RSV bit is set, as a protocol violation?
(For the server implementation I do check the client request headers for Sec-WebSocket-Extensions before creating the handshaker, which is fine.)
The only solution I had was to replace the WebSocketFrameDecoder after finishing the handshake, setting allowExtensions to false when the handshake response lacks the extension header:
handshaker.finishHandshake(ctx.channel(), handshakeResponse);
Channel channel = ctx.channel();
String extensionsHeader = handshakeResponse.headers().getAsString(HttpHeaderNames.SEC_WEBSOCKET_EXTENSIONS);
if (extensionsHeader == null) {
    // Replace the frame decoder to make sure the RSV bits are not allowed
    channel.pipeline().replace(WebSocketFrameDecoder.class, "ws-decoder",
            new WebSocket13FrameDecoder(false, false, handshaker.maxFramePayloadLength(), false));
}

What is "configuredOnly" used for in ConnectionMultiplexer.GetEndPoints()?

I am using the fantastic StackExchange.Redis library to implement ObjectCache. One of the interface methods to implement in ObjectCache is long GetCount(...) which returns the number of keys in the database. It looks like this can be satisfied by the IServer.DatabaseSize(...) method in StackExchange.Redis.
I plan on fetching the server endpoints from ConnectionMultiplexer.GetEndPoints(), getting an IServer for each endpoint, and then querying the database size for each database I am interested in on each server (ignore size discrepancies for the moment).
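For context, a sketch of that plan; this assumes a connected multiplexer named redis and counts only database 0, so treat it as illustrative rather than definitive:

long totalKeys = 0;
foreach (var endpoint in redis.GetEndPoints())
{
    IServer server = redis.GetServer(endpoint);
    if (server.IsConnected)
        totalKeys += server.DatabaseSize(0); // DBSIZE for database 0 on this node
}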
Now, ConnectionMultiplexer.GetEndPoints() has an optional parameter called "configuredOnly". What is the consequence of not providing it, versus true, versus false?
In the ConnectionMultiplexer.GetEndPoints() implementation, I see that it returns the EndPoints from the multiplexer configuration if configuredOnly is true, or else returns EndPoints from an array called "serverSnapshot".
As best I can tell, "serverSnapshot" is populated as servers are connected, or at least as connections are attempted.
Does GetEndPoints(true) return all EndPoints that were configured on the ConnectionMultiplexer? Do GetEndPoints() and GetEndPoints(false) return EndPoints that are actually connected/valid? The documentation for the GetEndPoints method with respect to the configuredOnly parameter is sparse, and my subsequent use of the returned EndPoints needs one behavior and not the other.
When configuredOnly is set to true, GetEndPoints() only returns endpoints for the Redis servers explicitly specified in the call to ConnectionMultiplexer.Connect(). Conversely, when configuredOnly is false, endpoints are returned for every Redis server in the cluster, whether or not they were specified in the initial ConnectionMultiplexer.Connect() call.
Somewhat strangely, if you use DNS names in the ConnectionMultiplexer.Connect() call, GetEndPoints(false) will return rows for both the DNS name and the resolved IP address. For example, with a six-node Redis cluster the following code:
ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost:6379,localhost:6380");
foreach (var endpoint in redis.GetEndPoints(false))
{
    Console.WriteLine(endpoint.ToString());
}
will output
127.0.0.1:6379
Unspecified/localhost:6379
Unspecified/localhost:6380
127.0.0.1:6380
127.0.0.1:6381
127.0.0.1:6382
127.0.0.1:6383
127.0.0.1:6384
If I had called redis.GetEndPoints(true), only Unspecified/localhost:6379 and Unspecified/localhost:6380 would be returned.

Using Sessions in synchronous Request-Response patterns

I am trying to get sessions to work in the following architecture.
Multiple heterogeneous worker roles that monitor and process requests from queue1, and send their responses to queue2.
One front web role, which receives requests from outside via REST or SOAP, submits them into queue1, and waits for a response from queue2. Once it's received, the response is returned to the caller.
The web role is there to leverage scalability and allow the worker roles to be created dynamically when the load is too high (hence the entire Rube Goldberg machine; there is no way to do this without the service bus).
I am using a call to:
MessageSession sess = myQueueClient.AcceptMessageSession(mySessionId, TimeSpan.FromSeconds(timeoutPerSec));
which is followed by:
BrokeredMessage bm = sess.Receive();
and the call to AcceptMessageSession crashes and burns with the exception:
BR0012: A sessionful message receiver cannot be created on an entity that does not require sessions. Ensure RequiresSession is set to true when creating a Queue or Subscription to enable sessionful behavior.
Now I do set RequiresSession to true:
if (!_queueManager.QueueExists(clientID))
    _queueManager.CreateQueue(clientID).RequiresSession = true;
else
    _queueManager.GetQueue(clientID).RequiresSession = true;
but it does not help.
What am I doing wrong?
You have to set RequiresSession to true in the QueueDescription at the moment you create the queue; setting it on the QueueDescription of an already created queue has no effect.
So in your case the queue creation should look similar to this:
if (!_queueManager.QueueExists(clientID))
{
    QueueDescription queueDescription = new QueueDescription(clientID)
    {
        RequiresSession = true
    };
    _queueManager.CreateQueue(queueDescription);
}
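With the queue created this way, the round trip from the question works; a hedged sketch, assuming the worker stamps the caller's session id onto its response (names reused from the question, timeout chosen arbitrarily):

// Worker role: copy the caller's session id onto the response.
var response = new BrokeredMessage("response payload") { SessionId = mySessionId };
responseQueueClient.Send(response);

// Web role: wait for exactly that session on the response queue.
MessageSession sess = myQueueClient.AcceptMessageSession(mySessionId, TimeSpan.FromSeconds(30));
BrokeredMessage bm = sess.Receive();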

Boost::asio UDP Broadcast with ephemeral port

I'm having trouble with UDP broadcast transactions under boost::asio, related to the following code snippet. Since I'm trying to broadcast in this instance, deviceIP = "255.255.255.255". devicePort is a specified management port for my device. I want to use an ephemeral local port, so I would prefer, if at all possible, not to have to socket.bind() after the connection; the code supports this for unicast by setting localPort = 0.
boost::asio::ip::address_v4 targetIP = boost::asio::ip::address_v4::from_string(deviceIP);
m_targetEndPoint = boost::asio::ip::udp::endpoint(targetIP, devicePort);
m_ioServicePtr = boost::shared_ptr<boost::asio::io_service>(new boost::asio::io_service);
m_socketPtr = boost::shared_ptr<boost::asio::ip::udp::socket>(new boost::asio::ip::udp::socket(*m_ioServicePtr));
m_socketPtr->open(m_targetEndPoint.protocol());
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));
// If no local port is specified, the default parameter is 0.
// If a local port is specified, bind to that port.
if (localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}
if (m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);
this->AsyncReceive(); // Register the async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this))); // Start a thread running the io_service processing loop
No matter what I do in terms of the following settings, the transmit is working fine, and I can use Wireshark to see the response packets coming back from the device as expected. These response packets are also broadcasts, as the device may be on a different subnet to the PC searching for it.
The issues are extremely strange to my mind, but are as follows:
If I specify the local port and set m_forceConnect = false, everything works fine, and my receive callback fires appropriately.
If I set m_forceConnect = true in the constructor, but pass in a local port of 0, the transmit works fine, but my receive callback never fires. I would assume this is because the 'target' (m_targetEndPoint) is 255.255.255.255, and since the device has a real IP, the response packet gets filtered out.
(What I actually want:) If m_forceConnect = false (and data is transmitted using a send_to call), and localPort = 0, therefore taking an ephemeral port, my RX callback immediately fires with error code 10022, which I believe is an "Invalid Argument" socket error.
Can anyone suggest why I can't use the connection in this manner (not explicitly bound and not explicitly connected)? I obviously don't want to use socket.connect() in this case, as I want to respond to anything I receive. I also don't want to use a predefined port, as I want the user to be able to construct multiple copies of this object without port conflicts.
As some people may have noticed, the overall aim of this is to use the same network-interface base class to handle both the unicast and broadcast cases. Obviously for the unicast version, I can perfectly happily m_socketPtr->connect() as I know the device's IP, and I receive the responses since they're from the connected IP address; therefore I set m_forceConnect = true, and it all just works.
As all my transmits use send_to, I have also tried socket.connect(endpoint(ip::address_v4::any(), devicePort)), but I get a 'The requested address is not valid in its context' exception when I try it.
I've tried a pretty serious hack:
boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), m_socketPtr->local_endpoint().port());
m_socketPtr->bind(localEndpoint);
where I extract the initial ephemeral port number and attempt to bind to it, but funnily enough that throws an Invalid Argument exception when I try to bind.
OK, I found a solution to this issue. Under Linux it's not necessary, but under Windows I discovered that if you are neither binding nor connecting, you must have transmitted something before you make the call to async_receive_from(), the call to which is included within my this->AsyncReceive() method.
My solution: make a dummy transmission of an empty string immediately before making the AsyncReceive call under Windows, so the modified code becomes:
m_socketPtr->set_option(boost::asio::socket_base::broadcast(true));
// If no local port is specified, the default parameter is 0.
// If a local port is specified, bind to that port.
if (localPort != 0)
{
    boost::asio::ip::udp::endpoint localEndpoint(boost::asio::ip::address_v4::any(), localPort);
    m_socketPtr->bind(localEndpoint);
}
if (m_forceConnect)
    m_socketPtr->connect(m_targetEndPoint);
// A dummy TX is required for the socket to acquire the local port properly under Windows.
// Transmitting an empty string works fine for this, but the TX must take place BEFORE the first call to async_receive_from(...).
#ifdef WIN32
m_socketPtr->send_to(boost::asio::buffer("", 0), m_targetEndPoint);
#endif
this->AsyncReceive(); // Register the async receive callback and buffer
m_socketThread = boost::shared_ptr<boost::thread>(new boost::thread(boost::bind(&MyNetworkBase::RunSocketThread, this)));
It's a bit of a hack in my book, but it is a lot better than implementing all the machinery needed to defer the async receive call until after the first transmission.
