How does Google protobuf RpcController work? - protocol-buffers

I would like to know the internals of how the RPC mechanism in Google protobuf works.
Does it use TCP or UDP? What protocol does it use to communicate between remote machines?

Protobuf's RpcController is an abstract interface that could be implemented by multiple different RPC systems using a variety of protocols.
If you're specifically asking about gRPC -- Google's Protobuf-based RPC framework -- it sends RPCs as HTTP/2 requests/responses with Protobuf-encoded bodies, over TCP. You can read all about it on the gRPC web site.
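For a concrete feel of what that looks like in practice, here is a minimal Python sketch of a gRPC call. The `Greeter` service, its messages, and the generated `greeter_pb2*` modules are hypothetical examples (they would come from running `protoc` on a `.proto` file), not part of protobuf itself.

```python
# Minimal gRPC client sketch (assumes hypothetical greeter_pb2 / greeter_pb2_grpc
# modules generated from a Greeter service definition).
import grpc
import greeter_pb2
import greeter_pb2_grpc

def main():
    # gRPC opens an HTTP/2 connection over TCP to the server.
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = greeter_pb2_grpc.GreeterStub(channel)
        # Request/response messages are Protobuf-encoded on the wire.
        reply = stub.SayHello(greeter_pb2.HelloRequest(name="world"))
        print(reply.message)

if __name__ == "__main__":
    main()
```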

Related

Web Sockets Protocol - Clarification needed here

I was reading the official RFC for the WebSocket protocol in order to implement it for learning purposes. I wanted to make things a bit different somehow, but I was not sure what to make different. While I was reading the document, I came across this:
The WebSocket Protocol attempts to address the goals of existing bidirectional HTTP technologies in the context of the existing HTTP infrastructure; as such, it is designed to work over HTTP ports 80 and 443 as well as to support HTTP proxies and intermediaries, even if this implies some complexity specific to the current environment. However, the design does not limit WebSocket to HTTP, and future implementations could use a simpler handshake over a dedicated port without reinventing the entire protocol.
Does this imply that a custom WebSocket protocol can be implemented with a non-HTTP-based handshake?
If so, does it mean the regular JavaScript WebSocket client will not work with it, and I would need to implement a custom client to communicate using this protocol?

How does grpc achieve "bidirectional streaming rpc" like a websocket?

Is this bidirectional stream native to HTTP/2? I looked at various HTTP/2 clients, but I couldn't find any example where the client and server establish a single connection and continuously push messages from both sides.
(For HTTP/2, maybe at a lower level the communication between client and server uses one TCP connection and all the requests/responses are multiplexed over it, but at the application level I can't find any example where you establish a single connection object and reuse that connection object to push messages in both directions.)
So how does gRPC achieve "bidirectional streaming RPCs"? Specifically, in this document:
https://grpc.io/docs/what-is-grpc/core-concepts/
It indicates that the server side can define a bidirectional streaming RPC, which allows both the client and the server to continuously push messages and achieve WebSocket-like behavior.
Yes, bidirectional streaming is native to HTTP/2. You can read RFC-7540 for the details of how the protocol works, but basically it allows you to create several streams on a single TCP connection, and each stream can send data in either direction independently of each other.
I'm not familiar with all of the HTTP/2 libraries out there, but I know that nghttp2 will allow this in C++, and I think Java and Go have HTTP/2 implementations in their standard libraries.
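To make that concrete at the application level, here is a minimal Python sketch of a gRPC bidirectional streaming call. The `Chat` service, its `Talk` method, and the generated `chat_pb2*` modules are hypothetical examples generated from a `.proto` file.

```python
# Bidirectional streaming sketch (assumes hypothetical chat_pb2 / chat_pb2_grpc
# modules for a Chat service with:
#   rpc Talk(stream ChatMessage) returns (stream ChatMessage);
import grpc
import chat_pb2
import chat_pb2_grpc

def outgoing_messages():
    # The client pushes messages whenever it likes; each yield is sent
    # as a DATA frame on the same HTTP/2 stream.
    for text in ["hello", "is anyone there?", "bye"]:
        yield chat_pb2.ChatMessage(text=text)

def main():
    with grpc.insecure_channel("localhost:50051") as channel:
        stub = chat_pb2_grpc.ChatStub(channel)
        # The server can push responses at any time; reading and writing
        # happen independently over one long-lived connection.
        for reply in stub.Talk(outgoing_messages()):
            print("server said:", reply.text)

if __name__ == "__main__":
    main()
```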

How to proxy gRPC calls

I'm trying to analyse what information an app is sending, so I set up Charles, but to my surprise nothing was logged.
After decompiling the app I see that it doesn't use simple REST calls but rather a library called gRPC.
Is there a good tool out there that will allow me to see what is sent out from the app?
Mediator is a cross-platform GUI gRPC debugging proxy, like Charles but designed for gRPC.
You can dump all gRPC requests without any configuration.
Mediator can render the binary messages into a JSON tree when you have the API schema.
It supports decoding gRPC over TLS, but you need to download and install the Mediator Root Certificate on your device.
gRPC uses HTTP/2 as a transport protocol. Any proxy which supports HTTP/2 for both the front-end and back-end connections should be usable to inspect the traffic of a gRPC connection. Note that some proxies only support HTTP/2 for the front-end or back-end connections, and those are incompatible with gRPC.
Envoy Proxy (https://www.envoyproxy.io/) supports proxying gRPC connections and can be configured to log out request information.
Some other example proxies include:
Nginx https://medium.com/nirman-tech-blog/nginx-as-reverse-proxy-with-grpc-820d35642bff
https://github.com/mwitkow/grpc-proxy
https://github.com/mercari/grpc-http-proxy
If you are asking about Android, there is an app called HttpCanary. It can log requests/responses.

What is the difference between WebSocket and STOMP protocols?

What are the major differences between WebSocket and STOMP protocols?
This question is similar to asking the difference between TCP and HTTP. I shall still try to address your question; it's natural to get confused between these two terms when you are starting out.
Short Answer
STOMP is layered on top of WebSockets. STOMP just specifies how message frames are exchanged between the client and the server over WebSockets.
Long Answer
WebSockets
It is a specification to allow asynchronous bidirectional communication between a client and a server. While similar to TCP sockets, it is a protocol that operates as an upgraded HTTP connection, exchanging variable-length frames between the two parties, instead of a stream.
STOMP
It defines a protocol for clients and servers to communicate with messaging semantics. It does not define any implementation details, but rather addresses an easy-to-implement wire protocol for messaging integrations. It provides higher semantics on top of the WebSockets protocol and defines a handful of frame types that are mapped onto WebSockets frames. Some of these types are...
connect
subscribe
unsubscribe
send (messages sent to the server)
message (messages sent from the server)
BEGIN, COMMIT, ABORT (transaction management)
WebSocket does imply a messaging architecture but does not mandate the use of any specific messaging protocol. It is a very thin layer over TCP that transforms a stream of bytes into a stream of messages (either text or binary) and not much more. It is up to applications to interpret the meaning of a message.
Unlike HTTP, which is an application-level protocol, in the WebSocket protocol there is simply not enough information in an incoming message for a framework or container to know how to route it or process it. Therefore WebSocket is arguably too low level for anything but a very trivial application. It can be done, but it will likely lead to creating a framework on top. This is comparable to how most web applications today are written using a web framework rather than the Servlet API alone.
For this reason the WebSocket RFC defines the use of sub-protocols. During the handshake, the client and server can use the header Sec-WebSocket-Protocol to agree on a sub-protocol, i.e. a higher, application-level protocol to use. The use of a sub-protocol is not required, but even if not used, applications will still need to choose a message format that both the client and server can understand. That format can be custom, framework-specific, or a standard messaging protocol.
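As a rough illustration of sub-protocol negotiation and STOMP framing, here is a minimal Python sketch using the third-party `websockets` library. The broker URL and destination are hypothetical, and a real client would also handle CONNECTED/ERROR frames, receipts, and heart-beats.

```python
# Minimal STOMP-over-WebSocket sketch (assumes a broker speaking STOMP 1.2
# behind a WebSocket endpoint at ws://localhost:15674/ws; URL is an example).
import asyncio
import websockets

NULL = "\x00"  # STOMP frames are terminated by a NUL byte

async def main():
    # Sec-WebSocket-Protocol is used to agree on the STOMP sub-protocol.
    async with websockets.connect(
        "ws://localhost:15674/ws", subprotocols=["v12.stomp"]
    ) as ws:
        # CONNECT frame: command line, headers, blank line, body, NUL.
        await ws.send("CONNECT\naccept-version:1.2\nhost:/\n\n" + NULL)
        print(await ws.recv())  # expect a CONNECTED frame

        # SUBSCRIBE to a destination, then wait for MESSAGE frames.
        await ws.send("SUBSCRIBE\nid:0\ndestination:/queue/demo\n\n" + NULL)
        print(await ws.recv())

asyncio.run(main())
```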
STOMP is a simple messaging protocol originally created for use in scripting languages, with frames inspired by HTTP. STOMP is widely supported and well suited for use over WebSocket and over the web.
The WebSocket API enables web applications to handle bidirectional communications, whereas STOMP is a simple text-oriented messaging protocol. A bidirectional WebSocket allows a web server to initiate a new message to a client, rather than wait for the client to request updates. The message could be in any protocol that the client and server agree on.
The STOMP protocol is commonly used over a WebSocket connection.
A good tutorial is STOMP Over WebSocket by Jeff Mesnil (2012).
STOMP can also be used without a WebSocket, e.g. over a raw TCP (Telnet) connection or via a message broker.
And raw WebSockets can be used without STOMP, e.g. a Spring Boot + WebSocket example without STOMP and SockJS.
Note: Others have explained well what both WebSocket and STOMP are, so I'll try to add the missing bits.
The WebSocket protocol defines two types of messages (text and binary), but their content is undefined.
The WebSocket protocol defines a mechanism for the client and server to negotiate a sub-protocol (that is, a higher-level messaging protocol such as STOMP) to use on top of WebSocket, which defines the following things:
what kind of messages each can send,
what the format is,
the content of each message, and so on.
The use of a sub-protocol is optional but, either way, the client and the server need to agree on some protocol that defines message content.
TL;DR: STOMP is a messaging protocol layered on top of WebSockets, i.e. STOMP utilizes WebSockets in the background. If you are thinking of building a notification/messaging system, use STOMP.
Reference
https://stomp.github.io/stomp-specification-1.2.html

grpc and zeromq comparison

I'd like to compare the capabilities of gRPC vs. ZeroMQ and its patterns, and create some kind of feature-set comparison. 0mq is "better sockets", but if I apply 0mq patterns I end up with comparable 'frameworks', I think, and here 0mq seems to be much more flexible...
The main requirements are:
async req / res communication (inproc or remote) between nodes
flexible messages routing
loadbalancing support
well documented
any ideas?
thanks!
async req / res communication (inproc or remote) between nodes
Both libraries allow for synchronous or asynchronous communication, depending on how you implement the communication. See this page for gRPC: http://www.grpc.io/docs/guides/concepts.html. Basically, gRPC allows for typical HTTP synchronous request/response or 'websocket-like' bidirectional streaming. For 0mq you can set up a simple REQ-REP connection, which is basically a synchronous communication path, or you can create async ROUTER-DEALER type topologies.
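To make the 0mq side concrete, here is a minimal pyzmq REQ-REP sketch of the synchronous request/reply path; the endpoint and messages are arbitrary examples and error handling is omitted.

```python
# Minimal synchronous REQ-REP sketch with pyzmq; run server() and client()
# in separate processes (the endpoint and payloads are arbitrary examples).
import zmq

def server():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5555")
    while True:
        request = sock.recv()          # blocks until a request arrives
        sock.send(b"ack: " + request)  # REP must reply before the next recv

def client():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect("tcp://localhost:5555")
    sock.send(b"hello")
    print(sock.recv())                 # blocks until the reply arrives
```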
flexible messages routing
'Routing' essentially means that a message gets from A to B via some broker. This is trivially done in 0mq and there are a number of topologies that support stuff like this (http://zguide.zeromq.org/page:all#Basic-Reliable-Queuing-Simple-Pirate-Pattern). In gRPC the same sort of topology could be created with a 'pub-sub' like connection over a stream. gRPC supports putting metadata in the message (https://github.com/grpc/grpc-go/blob/master/Documentation/grpc-metadata.md) which will allow you to 'route' a message into a queue that a 'pub-sub' connection could pull from.
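As an example of the kind of broker topology 0mq makes easy, here is a minimal pyzmq sketch of a ROUTER-DEALER proxy that routes requests from many clients to many workers; the endpoints are arbitrary examples.

```python
# Minimal broker sketch with pyzmq: clients connect to the ROUTER side,
# workers connect to the DEALER side, and zmq.proxy() shuttles messages
# (including reply routing via identity frames) between them.
import zmq

ctx = zmq.Context()

frontend = ctx.socket(zmq.ROUTER)   # clients (REQ sockets) connect here
frontend.bind("tcp://*:5559")

backend = ctx.socket(zmq.DEALER)    # workers (REP sockets) connect here
backend.bind("tcp://*:5560")

zmq.proxy(frontend, backend)        # blocks forever, forwarding both ways
```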
loadbalancing support
gRPC has health check support (https://github.com/grpc/grpc/blob/master/doc/health-checking.md), but because it's HTTP/2 you'd have to have an HTTP/2 load balancer to support the health check. This isn't a huge deal, however, because you can tie the health check to an HTTP/1.1 service which the load balancer calls. 0mq is a plain TCP connection, which means a load balancer would likely check a 'socket connect' in TCP mode to verify the connection. This works, but it's not as nice. Again, you could get slick and front the 0mq service with an HTTP/1.1 webserver that the load balancer reads from.
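As a sketch of the gRPC health-check side, here is roughly what registering the standard health-checking service looks like in Python with the `grpcio-health-checking` package; the port is an arbitrary example, and an HTTP/2-capable load balancer (or the `grpc_health_probe` tool) would poll this endpoint.

```python
# Sketch: exposing the standard gRPC health-checking service so that an
# HTTP/2-capable checker can poll it alongside your application services.
from concurrent import futures

import grpc
from grpc_health.v1 import health, health_pb2, health_pb2_grpc

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))

    # Register the health service next to your own servicers.
    health_servicer = health.HealthServicer()
    health_pb2_grpc.add_HealthServicer_to_server(health_servicer, server)

    # Mark the overall server ("" = all services) as SERVING.
    health_servicer.set("", health_pb2.HealthCheckResponse.SERVING)

    server.add_insecure_port("[::]:50051")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
```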
well documented
Both are well documented. 0mq's documentation must be read thoroughly to understand the technology, and it is more of a lift.
Here are the big differences:
0mq is a TCP-level protocol, whereas gRPC is HTTP/2 with a binary payload.
0mq requires you to design a framing protocol (frame 1 = version, frame 2 = payload, etc.), whereas much of this work is done for you in gRPC (see the framing sketch after this list).
gRPC is transparently converted to REST (https://github.com/grpc-ecosystem/grpc-gateway), whereas 0mq requires a middleware application if you want to talk REST to it.
gRPC uses standard TLS X.509 certificates (think websites), whereas 0mq uses a custom encryption/authentication protocol (http://curvezmq.org/). Prior to 4.x there was no encryption support in 0mq, and if you really wanted it you had to dive into this crap: https://wiki.openssl.org/index.php/BIO (trust me, don't do it).
0mq can create some pretty sick topologies (https://github.com/zeromq/majordomo) (https://rfc.zeromq.org/spec:7/MDP/) whereas gRPC is basically client/server
0mq requires much more time to build and get running, whereas gRPC is basically compiling protobuf messages and importing the service into your code.
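To illustrate the framing point above, here is a minimal pyzmq sketch of a hand-rolled two-frame convention (version + payload); the layout is an arbitrary example, and it is exactly the kind of convention gRPC's generated code spares you from designing.

```python
# Hand-rolled framing over 0mq multipart messages: frame 1 = version,
# frame 2 = payload. The layout is an arbitrary example convention.
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUSH)
sock.connect("tcp://localhost:5557")

# Sender: every message carries an explicit version frame.
sock.send_multipart([b"v1", b'{"user": "alice"}'])

# A receiver (e.g. a PULL socket elsewhere) would unpack and check it:
#   version, payload = sock.recv_multipart()
#   if version != b"v1": raise ValueError("unsupported frame version")
```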
Not quite the same. gRPC is primarily for heterogeneous service interoperability; ZeroMQ (ZMQ/0MQ/ØMQ) is a lower-level messaging framework. ØMQ doesn't specify payload serialization beyond passing binary blobs, whereas gRPC chooses Protocol Buffers by default. ØMQ is pretty much stuck on the same machine or machines between datacenters/clouds, whereas gRPC could potentially work on a real client too (i.e. mobile or web; it already supports iOS). Using ØMQ could be significantly faster and more efficient for in-cloud/in-datacenter services than the overhead, latency and complexity of gRPC's HTTP/2 request/response chain. I'm not sure how (or even if) gRPC TLS security is adequate for public cloud and mobile/web usage, but one could always inject end-to-end security requirements (e.g. libsodium) at the router/controller level of the app/app framework and run in plaintext mode (which would also remove the OpenSSL fork BoringSSL from causing maintenance headaches because of upstream flaws).
For very high latency / low bandwidth services (e.g. a mission to Mars), one would think about RPC using a transport like SMTP (e.g. Active Directory alternate replication) or MQTT (e.g. Facebook Messenger, ZigBee, SCADA).
Bonus (off-topic): It would be nice if gRPC had alternate pluggable transports like ØMQ (which also itself supports UNIX sockets, TCP, PGM and inproc), because HTTP/2 isn't stable in all languages yet and it's slower than ØMQ. And, it's worth looking at nanomsg (especially in the HFT world) because it could be extended with RDMA/SDP/MPI and made crazy low-latency/zero-copy/Infiniband-ready.
