Detect server disconnect in gRPC Go client

I have a gRPC service similar to the one below:
// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}
I need the client to maintain a long-lived gRPC connection to the server, so that if the server goes down the client can reconnect and issue SayHello() again.
Based on my understanding there are a few options:
1. Pass a statsHandler to grpc.Dial and add retry logic in HandleConn().
2. Add a new client-streaming API that sends a message every few seconds; check for server-side stream-close errors and implement retry logic.
I'm not sure whether there is a recommended way to handle my use case and would appreciate any help.
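For what it's worth, a gRPC channel will normally re-establish its connection on its own once the server comes back, so neither a statsHandler nor a heartbeat stream is strictly required: you can watch the channel's connectivity state and simply retry SayHello() when a call fails. Below is a minimal sketch of that idea, written against grpc-java only for consistency with the other Java examples on this page; grpc-go's ClientConn exposes the analogous GetState, WaitForStateChange and Connect calls. The address, port, sleep interval and class name are placeholders, and the Greeter classes generated from the proto above are assumed to be on the classpath.

import io.grpc.ConnectivityState;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.StatusRuntimeException;

public class GreeterClient {

    // Log every connectivity transition; the callback fires only once, so re-register each time.
    static void watchState(ManagedChannel channel, ConnectivityState current) {
        channel.notifyWhenStateChanged(current, () -> {
            ConnectivityState next = channel.getState(false);
            System.out.println("channel state: " + current + " -> " + next);
            watchState(channel, next);
        });
    }

    public static void main(String[] args) throws InterruptedException {
        // One long-lived channel; gRPC reconnects it in the background when the server drops.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 50051)   // placeholder address/port
                .usePlaintext()
                .build();
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
        watchState(channel, channel.getState(false));

        while (true) {
            try {
                HelloReply reply = stub.sayHello(
                        HelloRequest.newBuilder().setName("world").build());
                System.out.println(reply.getMessage());
            } catch (StatusRuntimeException e) {
                // Typically UNAVAILABLE while the server is down; just retry later.
                System.err.println("RPC failed: " + e.getStatus());
            }
            Thread.sleep(5_000);   // placeholder retry interval
        }
    }
}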

Related

How to send the data from a server to a client in the embedded messages in C++ using grpc?

I am implementing a simple client-server grpc-c++ based application. In the Hello rpc, I take the request and send the fields of another message called ServerInfo as the response. The problem is that I don't know exactly how to send this ServerInfo data to the client from the server side. We normally use set_fieldname (e.g. set_name) for basic datatypes, but how should we attach this ServerInfo data to the HelloReply so it gets back to the client? Can somebody please help me?
Below I am attaching the proto file.
syntax = "proto3";
package sample;

service Sample {
  rpc Hello(HelloRequest) returns (HelloReply) {}
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  ServerInfo sinfo = 1;
}

message ServerInfo {
  string name = 1;
  string os = 2;
}
You can define another rpc in your service definition, like:
service Sample {
  rpc Hello(HelloRequest) returns (HelloReply) {}
  rpc GetServerInfo(HelloRequest) returns (ServerInfo) {}
}
Would that work for you?
Here is the answer that worked for me. Thank you.
// Inside the Hello() handler, where `rep` is the HelloReply* that gRPC passes in.
ServerInfo* serverinfo = new ServerInfo();
serverinfo->set_name("");
serverinfo->set_os("");
// The field is named `sinfo`, so the generated setter is set_allocated_sinfo();
// the HelloReply takes ownership of the allocated ServerInfo.
rep->set_allocated_sinfo(serverinfo);

customization in protobuf java generated code

We have a use case where we have many RPCs defined in different .proto files, and we generate Java-based gRPC stub code using Google's protobuf-java and protoc-gen-grpc-java Gradle plugins.
The requirement is that we want to generate a new service which flips the request and response, and adds streaming to the new flipped rpc.
So, for example:
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}
should be converted to something like:
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
  rpc SayHelloStreaming (stream HelloReply) returns (stream HelloRequest) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}
In the generated Java code I should end up with two services for each original service. We just want the final generated Java code to contain both services; the parser may or may not update the original .proto files.
Is this customization possible with the current protoc? Can we extend the plugin and write our own? Can someone please give some pointers.
Your question is unclear to me.
Revising proto files is a fundamental part of working with gRPC.
The Java tutorial on https://grpc.io includes an example of adding a method to a service. In part, this is because adding, removing, and updating methods, messages, and fields is a common behavior.
NOTE: To clarify nomenclature, in your example you're proposing adding a method to an existing service (definition). If you consider the proto as defining an API, this represents a non-breaking change. See Versioning gRPC services for a good overview. Existing clients will continue to work (they are only aware of SayHello) while new clients will be aware of SayHelloStreaming too.
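On the "can we extend the plugin and write ours" part: a protoc plugin is just an executable that reads a CodeGeneratorRequest from stdin and writes a CodeGeneratorResponse to stdout, and protobuf-java already ships those message classes, so a custom generator wired into the Gradle build is certainly possible. Below is a rough, hypothetical skeleton to show the mechanism only; the class name and the plain-text output file are made up, and a real generator would emit Java sources (or a rewritten .proto) the way protoc-gen-grpc-java does.

// Hypothetical plugin, invoked by protoc as e.g. --plugin=protoc-gen-flip=<path> --flip_out=<dir>
import com.google.protobuf.DescriptorProtos.FileDescriptorProto;
import com.google.protobuf.DescriptorProtos.MethodDescriptorProto;
import com.google.protobuf.DescriptorProtos.ServiceDescriptorProto;
import com.google.protobuf.compiler.PluginProtos.CodeGeneratorRequest;
import com.google.protobuf.compiler.PluginProtos.CodeGeneratorResponse;

public class ProtocFlipPlugin {
    public static void main(String[] args) throws Exception {
        // protoc sends the parsed .proto files on stdin.
        CodeGeneratorRequest request = CodeGeneratorRequest.parseFrom(System.in);
        CodeGeneratorResponse.Builder response = CodeGeneratorResponse.newBuilder();

        for (FileDescriptorProto file : request.getProtoFileList()) {
            if (!request.getFileToGenerateList().contains(file.getName())) {
                continue;   // only handle the files protoc asked us to generate
            }
            for (ServiceDescriptorProto service : file.getServiceList()) {
                StringBuilder out = new StringBuilder();
                for (MethodDescriptorProto method : service.getMethodList()) {
                    // "Flipped" view of each rpc: swap input/output and mark both as streaming.
                    // Note: getInputType()/getOutputType() return fully-qualified names (".sample.HelloReply").
                    out.append(String.format(
                            "rpc %sStreaming (stream %s) returns (stream %s) {}%n",
                            method.getName(), method.getOutputType(), method.getInputType()));
                }
                // Emitting a .txt file here just to show the mechanism; a real plugin
                // would emit .java sources instead.
                response.addFile(CodeGeneratorResponse.File.newBuilder()
                        .setName(service.getName() + "Flipped.txt")
                        .setContent(out.toString()));
            }
        }
        // protoc reads the generated files back from stdout.
        response.build().writeTo(System.out);
    }
}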

Blocking tcp packet receiving in Netty 4.x

How can I stop Netty from sending an ACK response to the client in Netty 4.x?
I'm trying to control the TCP packet receive speed in Netty in order to forward these packets to another server. Netty receives all of the client's packets immediately, but it needs more time to send them on, so the client thinks the transfer is finished as soon as it has handed everything to Netty.
So I want to know how to hold back incoming packets while Netty is still forwarding the previously received packets to the other server.
I'm not sure I really understand your question, so let me try to reformulate:
I suppose that your Netty server is acting as a proxy between clients and another server.
I suppose that what you want to do is send the ack back to the client only once you have really sent the forwarded packet to the final server (not necessarily received by the final server, but at least sent by the Netty proxy).
If so, then you should use the future of the forwarded packet to respond back with the ack, such as (pseudo code):
channelOrCtxToFinalServer.writeAndFlush(packetToForward).addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture future) {
        // Perform the ack write back to the client
        ctxOfClientChannel.writeAndFlush(AckPacket);
    }
});
where:
channelOrCtxToFinalServer is either the ChannelHandlerContext or the Channel connected to your remote final server from your Netty proxy,
and ctxOfClientChannel is the current ChannelHandlerContext from your Netty handler that receives the packet from the client in its public void channelRead(ChannelHandlerContext ctxOfClientChannel, Object packetToForward) method.
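To tie the pseudo code and those two names together, a minimal hypothetical frontend handler could look like the sketch below; the class name, the way the outbound channel gets injected, and buildAck() are placeholders for whatever your proxy and protocol actually use.

import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

public class FrontendForwardHandler extends ChannelInboundHandlerAdapter {

    // Channel to the final server, opened elsewhere (e.g. in channelActive()).
    private final Channel outboundChannel;

    public FrontendForwardHandler(Channel outboundChannel) {
        this.outboundChannel = outboundChannel;
    }

    @Override
    public void channelRead(final ChannelHandlerContext ctx, Object msg) {
        // Forward the client's packet to the final server and, only once that
        // write has completed, send the protocol-level ack back to the client.
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    ctx.writeAndFlush(buildAck());
                } else {
                    ctx.channel().close();
                }
            }
        });
    }

    // Placeholder: whatever "ack" means in your own protocol.
    private Object buildAck() {
        return Unpooled.copiedBuffer("ACK", CharsetUtil.UTF_8);
    }
}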
EDIT:
For the big file transfer issue, you can have a look at Netty's Proxy example.
In particular, pay attention to the following:
Using the same logic, pay attention to receiving the client's data one chunk at a time:
yourServerBootstrap.childOption(ChannelOption.AUTO_READ, false);
// Allows you to control, one by one, the speed of reception of the client's packets
In your frontend handler:
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    // was able to flush out data, start to read the next chunk
                    ctx.channel().read();
                } else {
                    future.channel().close();
                }
            }
        });
    }
}
And finally, using the very same logic, add the final ack to your client (the ack depending of course on your protocol):
/**
 * Closes the specified channel after all queued write requests are flushed.
 */
static void closeOnFlush(Channel ch) {
    if (ch.isActive()) {
        ch.writeAndFlush(AckPacket).addListener(ChannelFutureListener.CLOSE);
    }
}

ZeroMQ choose recipient

I'm new to ZeroMQ (and to networking in general), and have a question about using ZeroMQ in a setup where multiple clients connect to a single server. My situation is as follows:
--1 server
--multiple clients
--Clients send messages to server: I've already figured out how to do this part.
--Server sends messages to a specific client: This is the part I'm having trouble with. When certain events get handled on the server, the server will need to send a message to a specific client -- not all clients. In other words, the server will need to be able to choose which client to send a given message to.
Right now, this is my server code:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateResponseSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            // There is no overload for the 'Send' method that takes an IP address as an argument!
            server.Send("ack");
        }
    }
}
I have a feeling that the problem is that my design is wrong, and that the ResponseSocket type isn't meant to be used in the way that I want to use it. Since I'm new to this, any advice is very much appreciated!
When using the response socket you are always replying to the client that sent you the message, so the request-response socket types together just give you a simple request-reply pattern.
For more complicated scenarios you probably want to use dealer-router.
With a router socket, the first frame of each message is the routing id (the identity of the client that sent you the message),
so your example with a router will look like:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateRouterSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            byte[] routingId = server.Receive();
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.SendMore(routingId).Send("ack");
        }
    }
}
I also suggest reading the ZeroMQ guide; it will probably answer most of your questions.

Using zmq_poll and zmq_send() on a same socket

I'm confused by a warning in the api of zmq_poll: "The zmq_send() function will clear all pending events on a socket. Thus, if you use zmq_poll() to monitor input on a socket, use it before output as well, and process all events after each zmq_poll() call."
I don't understand what that means, since events are level-triggered. If I call zmq_send() and then zmq_poll(), any pending messages in the socket's buffer should trigger zmq_poll() again immediately. Why does one need to "use it (zmq_poll) before output as well" or "process all events after each zmq_poll() call"?
I see your point; the documentation is confusing. Here's a simple test in Java using a client-side DEALER socket with a poller (from asyncsrv). The server sends 3 messages to the client. The client polls and outputs each message it receives. I've added send() in the client to test your theory. Assuming send() clears the poller, we'd expect the client to report receipt of only a single message:
Server
public static void main(String[] args) {
    Context context = ZMQ.context(1);
    ZMQ.Socket server = context.socket(ZMQ.ROUTER);
    server.bind("tcp://*:5555");
    server.sendMore("clientId");
    server.send("msg1");
    server.sendMore("clientId");
    server.send("msg2");
    server.sendMore("clientId");
    server.send("msg3");
}
Client
public void run() {
    socket = context.socket(ZMQ.DEALER);
    socket.setIdentity("clientId".getBytes());
    socket.connect("tcp://localhost:5555");
    ZMQ.Poller poller = new ZMQ.Poller(1);
    poller.register(socket, ZMQ.Poller.POLLIN);
    while (true) {
        poller.poll();
        if (poller.pollin(0)) {
            String msg = socket.recvStr(0);
            System.out.println("Client got msg: " + msg);
            socket.send("whatever", 0);
        }
    }
}
outputs...
Client got msg: msg1
Client got msg: msg2
Client got msg: msg3
Based on the results, doing send() does not clear the poller for the socket, and it should be obvious why: we configured the poller with POLLIN, meaning it listens for inbound messages on the socket. socket.send() creates outbound messages, which the poller is not listening for.
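For completeness, if you also wanted the poller to report write-readiness on that same socket, you would register for POLLOUT explicitly and check it separately. A rough fragment in the same Java binding, reusing the DEALER socket from the client above, might look like:

// `socket` is the DEALER socket created in the client above.
ZMQ.Poller poller = new ZMQ.Poller(1);
// Watch for both inbound messages and write-readiness on the same socket.
poller.register(socket, ZMQ.Poller.POLLIN | ZMQ.Poller.POLLOUT);

while (true) {
    poller.poll();
    if (poller.pollin(0)) {
        // A message is waiting to be received.
        System.out.println("Client got msg: " + socket.recvStr(0));
    }
    if (poller.pollout(0)) {
        // The socket can accept an outbound message without blocking.
        socket.send("whatever", 0);
    }
}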
Hope it helps...
