Are there other formats we can use instead of protobuf in gRPC?
The gRPC stack has no strict dependency on the marshaller/serializer being used. All that gRPC sees is a binary buffer with entirely opaque contents (no header tells the receiver which serializer was used), sent over HTTP/2 streams.
By convention, a gRPC service is described by a .proto schema that defines the methods and the payload messages; binding code is then generated from it, using Protocol Buffers as the marshaller/serializer.
However, if you're willing to write the binding code yourself (or use a library that does), you can register gRPC endpoints using your own marshaller/serializer. The exact details of how to do this will vary between platforms/languages/libraries, but yes: it is possible. Since no metadata (headers, etc) is used to resolve the marshaller/serializer, both client and server must agree in advance what format is going to be used for the payload.
The gRPC protocol is agnostic to the marshaller/IDL, but Protocol Buffers is the only marshaller directly supported by gRPC.
I'm aware of FlatBuffers and Bond supporting gRPC. There are probably others.
You are free to support your own favorite format. It isn't easy, but it isn't hard; it mainly involves glue code and defining the RPC schema for gRPC to use. The gRPC + JSON blog post walks through the process for grpc-java. Each language is a bit different and has to be supported individually.
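To give a concrete feel for that glue code, here is a minimal sketch of a JSON marshaller for grpc-java, in the spirit of that blog post. Gson and the EchoRequest/EchoResponse classes are assumptions of the sketch, not part of gRPC:

import com.google.gson.Gson;
import io.grpc.MethodDescriptor;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Hypothetical plain Java message classes, standing in for your own types.
final class EchoRequest { String message; }
final class EchoResponse { String message; }

// Sketch: serialize arbitrary objects as JSON instead of protobuf.
final class JsonMarshaller<T> implements MethodDescriptor.Marshaller<T> {
    private static final Gson GSON = new Gson();
    private final Class<T> clazz;

    JsonMarshaller(Class<T> clazz) { this.clazz = clazz; }

    @Override
    public InputStream stream(T value) {
        // gRPC treats this as an opaque byte stream.
        return new ByteArrayInputStream(
            GSON.toJson(value).getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public T parse(InputStream stream) {
        return GSON.fromJson(
            new InputStreamReader(stream, StandardCharsets.UTF_8), clazz);
    }
}

final class EchoMethod {
    // Client and server must both construct the descriptor the same way,
    // since nothing on the wire identifies the serialization format.
    static final MethodDescriptor<EchoRequest, EchoResponse> ECHO =
        MethodDescriptor.<EchoRequest, EchoResponse>newBuilder()
            .setType(MethodDescriptor.MethodType.UNARY)
            .setFullMethodName(MethodDescriptor.generateFullMethodName("Echo", "echo"))
            .setRequestMarshaller(new JsonMarshaller<>(EchoRequest.class))
            .setResponseMarshaller(new JsonMarshaller<>(EchoResponse.class))
            .build();
}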
I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to just use that in conjunction with gRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen Plugging custom transport into gRPC, but this question differs in the following ways (as well as my hoping for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it if possible from within my library, but the answer there implies adding a new transport to gRPC. If I did need to do this at the transport level, is there a mechanism to register it with gRPC after the core has been built?
I am unsure if I need to define this as a full custom transport, since I do already have an existing connection established and ready. I have seen some things that imply I could simply extend Channel, but I might be wrong.
I need to be able to support Windows, or at least modern versions of it (which means the from_fd options gRPC provides are not available, since they are currently only implemented for POSIX).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service-definition part of Protobuf does not actually depend on it.
How can I write my own RPC Implementation for Protocol Buffers utilizing ZeroMQ is very similar to my use case, and https://developers.google.com/protocol-buffers/docs/proto#services seems to resolve my issue (it also explains why I was mixing up the different kinds of "Channel" involved).
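For future searchers, here is a rough Java sketch of that proto#services approach: with option java_generic_services = true, protoc generates stubs that talk to a com.google.protobuf.RpcChannel, which you can implement on top of an existing connection. The sendOverMyConnection helper is a placeholder for your own framing/transport code, not a real API:

import com.google.protobuf.Descriptors;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Message;
import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcChannel;
import com.google.protobuf.RpcController;

// A sketch of an RpcChannel riding on an already-established connection.
final class MyChannel implements RpcChannel {
    @Override
    public void callMethod(Descriptors.MethodDescriptor method,
                           RpcController controller,
                           Message request,
                           Message responsePrototype,
                           RpcCallback<Message> done) {
        byte[] reply = sendOverMyConnection(method.getFullName(), request.toByteArray());
        try {
            done.run(responsePrototype.newBuilderForType().mergeFrom(reply).build());
        } catch (InvalidProtocolBufferException e) {
            controller.setFailed(e.getMessage());
        }
    }

    private byte[] sendOverMyConnection(String method, byte[] payload) {
        // Hypothetical: frame the request and send it over the existing
        // AF_UNIX connection, then return the response bytes.
        throw new UnsupportedOperationException("wire this to your transport");
    }
}

A generated stub can then be obtained with the generated newStub(new MyChannel()) factory, as the linked page describes.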
I welcome any improvements/suggestions, and hope that maybe this can be found in future searches by people that had the same confusion.
I am evaluating using gRPC. On the subject of compatibility with schema evolution, I did find information that Protocol Buffers, which gRPC exchanges for data serialization, are encoded such that different evolutions of a schema can stay wire-compatible, as long as the schema evolution rules allow it.
But that does not tell me whether two iterations of a gRPC client/server will be able to exchange a command that did not change in the schema, regardless of changes in the rest of the schema.
Does gRPC guarantee that an older generated version of client or server code will always be able to issue/answer a command that has not changed in the schema file, against code generated from any more recent schema on the other side, regardless of the rest of the schema? (Assuming no other breaking changes, like a non-backward-compatible gRPC version change.)
gRPC has compatibility for a method as long as:
the proto package, service name, and method name are unchanged
the proto request and response messages are still compatible
the cardinality (unary vs streaming) of the request and response message remains unchanged
New services and methods can be added at will and not impact compatibility of preexisting methods. This doesn't get discussed much because those restrictions are mostly what people would expect.
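To make those rules concrete, here is a hypothetical schema evolution in which every change is safe for peers built from the older .proto (all names are made up):

// v1
service Search {
  rpc Query (QueryRequest) returns (QueryResponse);
}

// v2: peers generated from v1 can still call Query
service Search {
  rpc Query (QueryRequest) returns (QueryResponse);    // same package, names, cardinality
  rpc Suggest (SuggestRequest) returns (SuggestResponse);  // new method: fine
}
// Adding new optional fields inside QueryRequest/QueryResponse is also safe;
// renaming the package, service, or method changes the request path and breaks old peers.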
There is actually some wiggle room in changing cardinality, as unary is encoded the same as streaming on the wire, but it is generally better to assume you can't change cardinality and to add a new method instead.
This topic is discussed in the Modifying gRPC Services over Time talk (PDF slides and YouTube links are available). Note that the slides are not intended to stand alone.
gRPC is a "general RPC framework" which uses ProtoBuffer to serialize and deserialize while the net/rpc package seems could do "nearly" the same thing with encoding/gob and both are under the umbrella of Google.
So what's the difference between them? What pros and cons dose choosing one of them have?
Well, you have said it yourself: gRPC is a framework that uses RPC to communicate. RPC is not Protobuf; rather, Protobuf is a serialization format that an RPC mechanism can use, and gRPC is essentially Protobuf over RPC.
You don't need to use Protobuf to create RPC services within your app. This is a good idea for small-to-medium libraries/apps. You also don't need to learn the Protobuf syntax to create your own services.
But Protobuf is much faster than, say, JSON over REST. It is a much more convenient way to communicate, with the downside of the learning curve of the Protobuf syntax. You can also use Protobuf to generate the codebase in more languages than just Go. So if you have some kind of service in Java, you can use Protobuf to generate the RPC calls between the two easily, whereas with the net/rpc package you would have to implement them twice (once in Go and once in Java).
In general, I would use Protobuf for nearly everything. It gives you the confidence to use it in larger or more complex projects.
Thrift sounds awesome, but I can't find some basic stuff I'm used to in RPC frameworks (such as HttpServlet). Examples of things I can't find: session management, filtering, upload/download progress.
I understand that the missing stuff might belong in a management layer on top of Thrift. If so, is there any example of such a layer? Perhaps AOP (Aspect-Oriented Programming)?
I can't imagine such a layer that compiles to all languages, and that's what I'm missing. Taking session management as an example: there might be several clients that all need to authenticate and pass the session_id on each RPC. I would expect a similar API for doing so in all languages.
Does anyone know of a management layer for Thrift?
So, Thrift itself is not going to help you out a lot here.
I have had similar desires, and have a few suggestions:
1. Put your management objects into the IDL
Simply add an API token or a common transfer-data struct as a parameter to all of your service methods. Give it parameter id 15 so that it will always be the last parameter, even if you add others in the middle.
As the first step in your handler you can validate/store/do whatever with the extra data.
This has the advantage that it is valid on any platform that Thrift supports; a sketch of the IDL follows below.
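For instance (RequestContext, UserService, and the fields are made-up names):

// A shared context struct passed to every method.
struct RequestContext {
  1: optional string sessionId,
  2: optional string apiToken
}

service UserService {
  // id 15 keeps the context last even as real parameters are added before it
  string getProfile(1: string userId, 15: RequestContext ctx)
}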
2. Use Thrift over HTTP
If you use HTTP as your transport, you can include whatever data you want as HTTP headers, with the Thrift content as the body.
This will often require a custom HTTP client for every platform you use to inject the data, and a custom handler on the server to consume it, but neither is prohibitively difficult; see the client-side sketch below.
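In the Java library, for example, THttpClient already supports custom headers directly. MyService and the URL here are placeholder assumptions:

import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.THttpClient;

// Thrift body over HTTP, with management data carried as a header.
// The THttpClient constructor throws TTransportException; handle as needed.
THttpClient trans = new THttpClient("http://example.com/api/thrift");
trans.setCustomHeader("X-Session-Id", "abc123");  // your out-of-band data
MyService.Client client = new MyService.Client(new TBinaryProtocol(trans));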
3. Hack the protocol
It is possible to create your own custom protocol that wraps another protocol and injects custom data. Take a look at how the multiplexed protocol works in the Thrift library for most languages (the C# implementation, for example). It sends the method name across the wire as service:method; the multiplexed processor unwraps this encoding and passes it on to the appropriate processor.
I have used a similar method to encode arbitrary key/value pairs (like http headers) inside the method name.
The downside to this is that you need to write a more complicated extension for each platform you will be using (once per platform). How it works varies a bit from language to language, but it is generally simple enough once you have figured it out; a Java sketch follows.
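Here is a minimal sketch of the idea, modeled on TMultiplexedProtocol. The "headers#method" encoding is invented for illustration; a matching processor on the server would strip the prefix back off:

import org.apache.thrift.TException;
import org.apache.thrift.protocol.TMessage;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.protocol.TProtocolDecorator;

// Smuggles key/value data into the method name, the same trick
// TMultiplexedProtocol uses for "service:method".
public class HeaderProtocol extends TProtocolDecorator {
    private final String headers;  // e.g. "session=abc123"

    public HeaderProtocol(TProtocol protocol, String headers) {
        super(protocol);
        this.headers = headers;
    }

    @Override
    public void writeMessageBegin(TMessage message) throws TException {
        // Prefix the method name with the encoded headers.
        super.writeMessageBegin(new TMessage(
            headers + "#" + message.name, message.type, message.seqid));
    }
}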
These are just a few ideas I have had, and I am sure there are others. The nice thing about Thrift is how decoupled the individual components are from each other. If you have special needs, you can swap any of them out as needed to add specific functionality.
I searched on the internet but couldn't find anything useful. At first I was thinking of using Protocol Buffers, but it doesn't provide a built-in feature for delimiting multiple messages on a stream (knowing where one message finishes and the next starts), i.e. self-delimiting messages. I read about this feature in the Thrift whitepaper and it seems good to me, so now I am thinking of using Thrift instead of Protocol Buffers.
I am working on a custom protocol for which I don't require RPC. Could someone confirm whether I can use Thrift without RPC (much as in Protocol Buffers, where one can simply use the stream functions), and suggest a starting point, as the Thrift documentation is a bit cumbersome?
Thanks!
Yes, it is possible; a similar answer is given here. Apache Thrift can be used without RPC: you can simply use the transport- and protocol-layer libraries on their own, as described in the documentation.
Apache Thrift is indeed an RPC and serialization framework. The serialization part is used by the RPC mechanism, but it can be used standalone. For the various languages there are samples and/or supporting helper classes available. If that is not the case for your particular language, the necessary code pretty much boils down to this (shown here in Java; the class names in other languages are analogous):
import org.apache.thrift.protocol.TJSONProtocol;
import org.apache.thrift.transport.TMemoryBuffer;

MyData data = new MyData();                     // any Thrift-generated struct
TMemoryBuffer trans = new TMemoryBuffer(1024);  // in-memory transport
TJSONProtocol prot = new TJSONProtocol(trans);  // pluggable protocol
data.write(prot);                               // serialize, no RPC involved
Both the transport(s) and the protocol are pluggable, so instead of JSON and a memory buffer you are free to use your own protocol and (for example) a file transport, or whatever other combination makes sense for your use case and is supported for your target language.
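Reading the data back in is symmetric; a short sketch, assuming the serialized bytes from above are in a hypothetical jsonBytes array:

import org.apache.thrift.protocol.TJSONProtocol;
import org.apache.thrift.transport.TMemoryBuffer;

// Deserialize: load the bytes into a transport and call read().
TMemoryBuffer trans = new TMemoryBuffer(jsonBytes.length);
trans.write(jsonBytes, 0, jsonBytes.length);
MyData data = new MyData();
data.read(new TJSONProtocol(trans));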
"as thrift documentation is a bit cumbersome."
You are free to ask any question, be it here or on the mailing list. Furthermore, we have a nice tutorial, and the test server/client pairs are also good examples of typical use cases.