At my company, we've standardized on using Protocol Buffers over a message bus as a way to allow services to communicate.
This works well; however, I'm running into a problem trying to figure out how to structure common definition files that I'd like to share among different teams. Is there a commonly accepted way to make collections of protocol buffer definitions available across teams?
Also, is it just a fact of life that all import paths have to be relative to the directory in which the protocol buffer compiler executes? Frankly, this seems a little silly, since Protocol Buffers already allows for namespace definitions. Or is this just an artifact of the Java-centric origins of Protocol Buffers?
I can only answer part of the question:
Also, is it just a fact of life that all import paths have to be relative to the directory in which the protocol buffer compiler executes?
You can use the --proto_path= option to specify the directories where the *.proto files live, including all the imported protos.
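For example, if the shared definitions live in a sibling directory pulled from a common repository, the invocation could look like this (all paths here are hypothetical):

protoc --proto_path=. --proto_path=../shared-protos --cpp_out=gen myservice.proto  # paths are made up

An import such as import "common/types.proto"; inside myservice.proto is then resolved against every --proto_path root, so it no longer depends on the directory protoc happens to run from.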
I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to use that in conjunction with gRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen Plugging custom transport into gRPC, but my question differs in the following ways (and I'm also hoping for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it from within my library if possible, but the answer there implies adding a new transport to gRPC's core. If I did need to work at the transport level, is there a mechanism to register a custom transport with gRPC after the core has been built?
I am unsure whether I need to define a full custom transport, since I already have an existing connection established and ready. I have seen some things that imply I could simply extend Channel, but I might be wrong.
I need to be able to support Windows, or at least modern versions of it (which means that the from_fd options gRPC provides are not available, since they are currently only implemented for POSIX).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service definition component of Protobuf is not dependent on it.
How can I write my own RPC Implementation for Protocol Buffers utilizing ZeroMQ is very similar to my use case, with https://developers.google.com/protocol-buffers/docs/proto#services seeming to resolve my issue (and this also explains why I seem to have been mixing up the different kinds of "Channels" involved).
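To make that concrete, here is roughly what I mean: a hypothetical service definition using proto2's generic services (protoc only emits the service classes when the option below is set; all names are made up):

// ping.proto (hypothetical; all names made up)
syntax = "proto2";
option cc_generic_services = true;  // ask protoc for abstract service/stub classes

message PingRequest { optional string payload = 1; }
message PingReply { optional string payload = 1; }

service PingService {
  rpc Ping (PingRequest) returns (PingReply);
}

The generated C++ code then contains an abstract PingService plus a PingService::Stub that funnels every call through a google::protobuf::RpcChannel, so hooking up an already-established connection boils down to implementing that one interface (rough shape only; error handling omitted):

class MyChannel : public google::protobuf::RpcChannel {
 public:
  void CallMethod(const google::protobuf::MethodDescriptor* method,
                  google::protobuf::RpcController* controller,
                  const google::protobuf::Message* request,
                  google::protobuf::Message* response,
                  google::protobuf::Closure* done) override {
    // Serialize *request, send it over the existing connection tagged with
    // method->full_name(), parse the reply into *response, then done->Run().
  }
};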
I welcome any improvements/suggestions, and hope this can be found in future searches by people who have run into the same confusion.
If I want to write a node for a P2P application (like Bitcoin, BitTorrent, etc.), there are a lot of parts that are the same:
I need to bootstrap to the network (discover other peers)
I need to manage a list of peers, and monitor their states
I need to retrieve lists of more peers from my neighbour peers
Etc, etc.
Since I don't want to re-invent the wheel, is there a framework that I could use as a sort of base library to build on?
You mention both Bitcoin and BitTorrent, which are quite different, so I'm assuming you don't want to be bound to any specific protocol or even serialization format.
And yet you mention peer discovery and state management, which are high-level concerns that have to be built on top of some network protocol.
But the protocol dictates how such mechanisms work.
It sort of sounds like you're asking whether there are pre-built roofs that would fit on skyscrapers just as well as on a wood cabin.
So if you actually want to design your own protocol, you should probably look at the foundation first:
which language do you want to use
what IO / event processing libraries are available
what protocol parsers and serializers are available
do you aim for throughput? low memory footprint? low latency? minimal amount of programmer-hours spent?
what kind of security is needed? heavy crypto use at the protocol level will need a trustworthy crypto library (don't roll your own!)
what kind of auxiliary things do you need (where does the data go? filesystem? databases? do you need a UI?)
Alternatively, depending on how one interprets your question: if you want to write a client for a specific network, then you should simply look for a library implementing the core concepts of that specific network while freeing you up to implement the rest of the application.
In BitTorrent's case, such an example would be libtorrent.
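To give a feel for how much such a library takes off your plate, a minimal libtorrent client boils down to something like this (a sketch against the libtorrent 1.2-era API; the magnet link is a placeholder):

#include <libtorrent/session.hpp>
#include <libtorrent/add_torrent_params.hpp>
#include <libtorrent/magnet_uri.hpp>

int main() {
  lt::session ses;  // peer discovery, peer-list management, piece exchange
  // placeholder magnet URI; substitute a real one
  lt::add_torrent_params atp = lt::parse_magnet_uri("magnet:?xt=urn:btih:...");
  atp.save_path = ".";
  ses.add_torrent(atp);
  // A real client would now loop over ses.pop_alerts() to react to events
  // (peers found, pieces finished, errors) instead of exiting immediately.
}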
I need to do something relatively simple, and I don't really want to install a MOM like RabbitMQ, etc.
There are several programs that "register" with a central "service" server through TCP. The only function of the server is to call back all the registered clients when they have all, in turn, said "DONE". So it is a kind of "join" (edit: barrier) for distributed client processes.
When all clients say "DONE" (they can be done at totally different times), the central server messages them all saying "ALL-COMPLETE". The clients "block" until asynchronously called back.
So this is a kind of distributed asynchronous Observer pattern. The server has to keep track of where the clients are somehow; it is OK for the client to pass its IP address to the server, etc. It would be constructable with things like Boost.Signals, Boost.Asio, Boost.Dataflow, etc., but I don't want to reinvent the wheel if something simple already exists. I got very close with ZeroMQ, but none of its patterns support this use case very well, AFAIK.
Is there a very simple system that does this? Notice that the server can be written in any language. I just need C++ bindings for the clients.
After much searching, I used this library:
https://github.com/actor-framework
It turns out that doing this with this framework is relatively straightforward. The only real "impediment" to using it is that the library seems to have gone through an API transition recently, and the documentation PDF has not completely caught up with the source. No biggie, since the example programs and the source (.hpp) files get you over this hump; however, they need to bring the docs in sync with the source. In addition, IMO they need to provide more interesting examples of how to use C++ actors for extreme performance. For my case it is not needed, but the idea of actors (shared-nothing) is one of the reasons people use them in this use case instead of shared-memory communication between threads.
Also, the syntax the library enforces (get used to lambdas!) can be a bit of a mind-twister at first if one is not used to state-of-the-art C++11 programs. Beyond that, the only other caveat was remembering all the clients that registered with the server, which turned out to be trivial.
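For the curious, the server side ends up being surprisingly small. A rough sketch (CAF's API has shifted between releases, so treat the exact names as approximate; here the expected client count is passed in up front):

#include <memory>
#include <string>
#include <vector>
#include "caf/all.hpp"

using namespace caf;

// Sketch only: collects "DONE" from a fixed number of clients, then calls
// each one back with "ALL-COMPLETE". Registration is implicit: whoever
// sends "DONE" is remembered until the barrier trips.
behavior barrier_server(event_based_actor* self, size_t expected) {
  auto waiting = std::make_shared<std::vector<actor>>();
  return {
    [=](const std::string& msg) {
      if (msg != "DONE")
        return;
      waiting->push_back(actor_cast<actor>(self->current_sender()));
      if (waiting->size() == expected) {
        for (auto& client : *waiting)
          self->send(client, std::string("ALL-COMPLETE"));
        waiting->clear();  // ready for the next round
      }
    }
  };
}

Clients on other machines then reach the server through CAF's I/O module (publish on the server side, remote_actor on the clients), which is what takes care of remembering where the clients live.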
STRONGLY RECOMMENDED.
I searched on the internet but couldn't find anything useful. First, I was considering Protocol Buffers, but it doesn't provide a built-in feature to frame multiple messages on a stream (where one message finishes and the second starts), i.e. self-delimiting messages. I read about this feature in the Thrift whitepaper, and it seems good to me. Now I am thinking of using Thrift instead of Protocol Buffers.
I am working on a custom protocol for which I don't require RPC. Could someone suggest whether I can use Thrift without RPC (as in Protocol Buffers, where one simply uses the stream functions), and some starting points, as the Thrift documentation is a bit cumbersome.
Thanks!
Yes, it is possible. A similar answer is given here. Apache Thrift can be used without RPC; you can simply use the transport- and protocol-layer libraries as they are defined in the documentation.
Apache Thrift is indeed an RPC and serialization framework. The serialization part is used as part of the RPC mechanism, but it can also be used standalone. For the various languages there are samples and/or supporting helper classes available. If this is not the case for your particular language, the necessary code pretty much boils down to this (pseudo code):
var data = InitializeMyDataStructure(...);  // any Thrift-generated struct
var trans = new TStreamTransport(...);      // transport: where the bytes go
var prot = new TJSONProtocol(trans);        // protocol: how they are encoded
data.write(prot);                           // serialize; no RPC involved
Both transport(s) and protocol are pluggable, so instead of JSON and a stream you are free to use your own protocol and (for example) a file transport, or whatever other combination makes sense for your use case and is supported for your target language.
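As a concrete C++ rendering of the pattern above, assuming a struct MyStruct generated from your .thrift file (sketched against the Thrift 0.13-era API; older releases used boost::shared_ptr instead of std::shared_ptr):

#include <memory>
#include <string>
#include <thrift/protocol/TJSONProtocol.h>
#include <thrift/transport/TBufferTransports.h>
#include "gen-cpp/my_types.h"  // hypothetical generated header

using apache::thrift::protocol::TJSONProtocol;
using apache::thrift::transport::TMemoryBuffer;

std::string serialize(const MyStruct& data) {
  auto buffer = std::make_shared<TMemoryBuffer>();
  auto proto = std::make_shared<TJSONProtocol>(buffer);
  data.write(proto.get());             // serialization only, no RPC anywhere
  return buffer->getBufferAsString();  // the JSON bytes, ready for your stream
}

Reading is the mirror image: construct the transport around the received bytes and call data.read(proto.get()).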
as thrift documentation is a bit cumbersome.
You are free to ask any questions, be it here or on the mailing list. Furthermore, we have a nice tutorial, and the test server/client pairs are also good examples for typical use cases.
I'm writing a simple Go application that needs to do some decoding of DNS packets. I noticed that the net library appears to contain the perfect implementation in the form of net/dnsmsg.go, which has the right structs, pack/unpack functions, etc.
However, the type is unexported (lowercase dnsMsg), so it appears that I have no way of using it from within my app.
I'm quite new to Go, so I don't know whether my only option is to reimplement net/dnsmsg.go myself, or if there's a better way around this.
My problem was solved by using a third-party DNS library, specifically miekg/dns (https://github.com/miekg/dns).
Another option would be to use Google's gopacket package, which provides packet decoding for Go. In particular, the layers sub-package provides logic for decoding protocol packets, including what is necessary to decode DNS packets.