Are there any Thrift-style RPC systems that allow callbacks?

After using several different messaging and RPC systems I have come to the conclusion that you eventually always need traditional RPC, and push events of some kind. Otherwise you inevitably end up with some polling hack.
For example, HTTP originally only supported RPC-style methods (GET and POST return a response immediately). People realised that push events were needed, so they hacked them in using long polling. Eventually this was fixed with Server-Sent Events.
CoAP (a lightweight UDP-based version of HTTP) also supports push events, via an 'Observe' option added to GET requests. It's a pretty elegant solution.
But neither of those are Thrift-style RPC, by which I mean you write an interface definition file, and there is some tool that compiles that interface into native code for your language of choice. Thereafter you can just call remote procedures almost as if they are local ones.
So my question is, are there any Thrift-style RPC systems that let you subscribe to push events and call a callback (or similar) when an event arrives?

Yes:
gRPC supports "streaming", which means a single logical RPC call can actually involve multiple messages in each direction.
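For example, marking the response as a stream in the .proto interface definition makes the generated client code hand you an iterator of pushed messages. A minimal Python sketch, where the service and the generated modules events_pb2/events_pb2_grpc are hypothetical:

import grpc
import events_pb2        # hypothetical module generated by protoc
import events_pb2_grpc   # hypothetical module generated by protoc

# Corresponding hypothetical .proto declaration:
#   rpc Subscribe(SubscribeRequest) returns (stream Event);
channel = grpc.insecure_channel("localhost:50051")
stub = events_pb2_grpc.EventServiceStub(channel)

# The streaming call returns an iterator; each yielded message is one
# pushed event, so this loop body plays the role of the callback.
for event in stub.Subscribe(events_pb2.SubscribeRequest(topic="alerts")):
    print("received event:", event)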
Cap'n Proto supports object capabilities, which allows either side of the connection to send an object reference to the other side, to which calls can be made. For example, the client could call a method on the server and, as one of the method parameters, provide a callback object. The callback object implements some pre-defined RPC interface. When the server calls the callback object, it is making a call back to the client. In fact, Cap'n Proto connections are fully symmetric: there is no distinction at the protocol level between client and server.
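A rough sketch of that callback pattern with the Python bindings (pycapnp); the schema file, interface and method names, and the address are all hypothetical:

import capnp

# Hypothetical schema defining EventService.subscribe(topic, cb) and a
# Callback interface with an onEvent method.
events_capnp = capnp.load("events.capnp")

class CallbackImpl(events_capnp.Callback.Server):
    # Invoked by the server over the wire: a call back into the client.
    def onEvent(self, payload, **kwargs):
        print("pushed event:", payload)

client = capnp.TwoPartyClient("localhost:60000")
service = client.bootstrap().cast_as(events_capnp.EventService)

# The callback object travels as an object reference (a capability), not
# as serialized data; the server can invoke it while the connection lives.
service.subscribe(topic="alerts", cb=CallbackImpl()).wait()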
(Disclosure: I am the author of Cap'n Proto, and was also the author of Protocol Buffers v2, though I am not affiliated with gRPC.)

Related

Freeswitch Event Socket Library - is there abstraction like Session as for internal scripting languages like mod_lua?

I am trying to work with FreeSWITCH using the Event Socket Library, and I am a bit surprised that it has no abstraction like the Session object in the internal scripting languages (where a session can be established, bridged, etc. using a simple API). Is this the case, and is my understanding correct?
If I understand correctly, ESL lets you send API commands like originate and receive events, and it is up to the application to track call state by processing those events; there are no helpers for this, correct?
So even if "Scripts using the Event Socket Library (ESL) can be run from anywhere, achieving the same results as built-in languages", it is up to the application developer to implement a Session abstraction on their side when using ESL. In other words, ESL is a low-level interface, and much more effort is necessary to, e.g., establish a call with originate, track its state (by processing events) and then bridge it, e.g. with uuid_transfer?
We are using the value of the "Unique-ID" header field to correlate events.
The implementation uses the following library with custom add-ons: https://github.com/esl-client/esl-client.
PS: for Lua, see: How to get value of SIP header in Freeswitch?
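To illustrate how low-level ESL is, here is a rough Python sketch of driving the inbound event socket by hand; the password, dial string, and the naive line-based parsing are illustrative only (a library such as esl-client handles framing and event parsing properly):

import socket

s = socket.create_connection(("127.0.0.1", 8021))
s.sendall(b"auth ClueCon\n\n")   # default ESL password, change in production
s.sendall(b"event plain CHANNEL_ANSWER CHANNEL_HANGUP\n\n")
s.sendall(b"api originate sofia/internal/1000 &park()\n\n")

# Events arrive as MIME-like header blocks; with no Session object on
# this side, the application correlates them to calls via Unique-ID.
while True:
    data = s.recv(4096).decode(errors="replace")
    for line in data.splitlines():
        if line.startswith("Unique-ID:"):
            print("event for call", line.split(":", 1)[1].strip())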

How to provide both initial data and subsequent events via WAMP/Websockets

I have an application from which I need to send live updates to web clients.
I'm currently happily using WebSockets for that, via the WAMP protocol, as it provides both publish-subscribe and RPC methods.
Now, I find that in lots of situations, when a user starts the application or a view, I need to send an initial state to the client and then keep sending updates. I do the former with an RPC call, and the latter via publish-subscribe.
This forces me to write server-side and client-side code for both of these methods, even though I'm basically conveying the same information in both cases.
On server side, I'm moving appropriate code to a common method, but I still need to take care of both sending the event and provide an entry point for the RPC call:
# RPC endpoint for getting mission info
def get_mission_info(self):
    return self._mission_info()

# Scheduled or manually called method to send mission info to all users
def publish_mission_info(self):
    self.wamp.publish("UPDATE_INFO", [self._mission_info()])

# Common method both of the above delegate to
def _mission_info(self):
    # Here we generate a JSON-serializable dict with the info
    return info
And as you can imagine, the client side (JS or Python) shows a similar duplication (two handler methods).
Question is: is there a cleverer way of handling this and avoiding that boilerplate code? Some approach I could follow, perhaps automatically sending the last event of each type to clients that ask for it, or that have just subscribed? Perhaps something at the Crossbar level?
In general terms, I feel I could be using a better state-synchronization strategy leveraging these two channels (pub-sub and RPC). How do people do it?
My WAMP server is Crossbar, and my client library is autobahn.js in Python and JS.
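One pattern that removes the duplicated RPC endpoint: have the server watch the router's subscription meta-events and publish the current snapshot only to the session that just subscribed, using an eligible whitelist. A sketch with autobahn's Twisted flavour, assuming the router exposes the standard wamp.subscription.on_subscribe meta-event (Crossbar does by default); the topic URI and the _mission_info helper mirror the code above. Crossbar's event-retention feature, where the router itself replays the last retained event to new subscribers, may be another option worth checking.

from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession
from autobahn.wamp.types import PublishOptions

class Backend(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        def on_subscribe(session_id, subscription_id):
            # Push the latest snapshot only to the session that just
            # subscribed (a real app would check which topic the
            # subscription_id refers to).
            self.publish("UPDATE_INFO", [self._mission_info()],
                         options=PublishOptions(eligible=[session_id]))

        yield self.subscribe(on_subscribe,
                             "wamp.subscription.on_subscribe")

    def _mission_info(self):
        return {}  # build the JSON-serializable snapshot here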

Design an application to support both synchronous and asynchronous calls

We are designing an API for an application where the clients (external) can interact with it synchronously to, say:
a) request a plan
b) cancel a plan, etc.
However, once the plan is made, the decision as to whether it is approved or disapproved is delivered asynchronously. The application itself can also send other notifications to the clients asynchronously. This part has been implemented using Spring's STOMP-over-WebSocket framework, and it works perfectly fine.
Now, coming to the synchronous part of the API, the plan is to provide a RESTful interface for that interaction. If it is done this way, clients will have to build two different client APIs: one using HTTP for making RESTful calls, and another using a STOMP client to consume notifications.
Should we rather make it accessible via one interface?
I am not convinced we should use STOMP for the synchronous calls, since I think a REST framework addresses that use case well. However, I am concerned about the need for clients to implement both, even though they serve different functionality.
Will it be okay to support both? Is this good design practice? Can someone please advise?
HTTP-based clients could a) send simple polling requests at long intervals to limit bandwidth usage, or b) use HTTP long polling, where the (blocking) request returns control to the client code as soon as the server sends a response.
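As a sketch of option b), a Python client loop against a hypothetical /notifications endpoint that the server holds open until it has something to deliver:

import requests

while True:
    try:
        # The server parks the request (here for up to 60s) and answers
        # as soon as a notification exists; the client then re-polls.
        resp = requests.get("https://api.example.com/notifications",
                            timeout=60)
        if resp.ok:
            print("notification:", resp.json())
    except requests.Timeout:
        continue  # nothing arrived within the window; poll again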

How do I do multiple publishers with a single endpoint in ZeroMQ?

I'm attempting to do a pub/sub architecture where multiple publishers and multiple subscribers exist on the same bus. According to what I've read on the internet, only one socket should ever call bind(), and all others (whether pub or sub) should call connect().
The problem is, with this approach I'm finding that only the publisher that actually calls bind() on the socket ever publishes messages. All of my publishers that call connect() seem to fail silently and don't actually publish any messages to the bus. I've confirmed this isn't a subscriber key issue, as I've written a simple "sniffer" app that subscribes to all messages on the bus, and it is only showing the publisher that called bind().
If I attempt multiple binds with the publisher, the "expected" zmq behavior of silently stealing the bus occurs with ipc, and a port in use error is thrown with tcp.
I've verified this behavior with ipc and tcp endpoints, but ultimately the full system will be using epgm. I assume (though of course may be wrong) that in this situation I wouldn't need a broker since there's no dynamic discovery occurring (endpoints are known, whether ipc, tcp, or epgm multicast).
Is there something I'm missing, perhaps a socket setting, that would be causing the connecting publishers to not actually send their data? According to the literature I've seen on the internet, I'm doing things the "correct" way but it still doesn't work.
For reference, my publisher class has the following methods for setting up the endpoint:
ZmqPublisher::ZmqPublisher()
    : m_zmqContext(1), m_zmqSocket(m_zmqContext, ZMQ_PUB)
{}

void ZmqPublisher::bindEndpoint(std::string ep)
{
    m_zmqSocket.bind(ep.c_str());
}

void ZmqPublisher::connect(std::string ep)
{
    m_zmqSocket.connect(ep.c_str());
}
So ultimately, my question is this: What is the proper way to handle multiple publishers on the same endpoint, and why am I not seeing messages from more than one publisher?
It may or may not be relevant, but The 0MQ Guide has the following slightly enigmatic remark:
In theory with ØMQ sockets, it does not matter which end connects and which end binds. However, in practice there are undocumented differences that I'll come to later. For now, bind the PUB and connect the SUB, unless your network design makes that impossible.
I've not yet discovered where the "come to later" actually happens, but I don't use pub/sub so much, and haven't read the "Advanced Pub-Sub Patterns" part of the guide in great detail.
However, the idea of multiple publishers on a single end-point, to me, suggests the need for an XPUB/XSUB style broker; it's not about dynamic discovery, it's about single point of contact and routing. Ultimately, I think a broker-based topology would simplify your application, and make it easier to identify problems.
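A minimal sketch of such a broker with pyzmq (endpoints are examples): every publisher connects to the XSUB side, every subscriber connects to the XPUB side, and zmq.proxy relays messages downstream and subscriptions upstream.

import zmq

ctx = zmq.Context()

xsub = ctx.socket(zmq.XSUB)   # all publishers connect here
xsub.bind("tcp://*:5557")

xpub = ctx.socket(zmq.XPUB)   # all subscribers connect here
xpub.bind("tcp://*:5558")

zmq.proxy(xsub, xpub)         # blocks forever, relaying both directions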
Your mistake is that you bind one publisher and connect the others; this is not supported by the plain PUB-SUB pattern.
Plain PUB-SUB in ZeroMQ supports only two scenarios:
single publisher, multiple subscribers
single subscriber, multiple publishers
In both cases, the party that is "single" must bind and the parties that are "multiple" must connect. Otherwise, if you want many-to-many, you can use XPUB-XSUB or some other pattern.
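For instance, the many-publishers case from the question works once the roles are flipped so that the single subscriber binds; a pyzmq sketch with an example endpoint:

import zmq

ctx = zmq.Context()

# The single, well-known party binds...
sub = ctx.socket(zmq.SUB)
sub.bind("tcp://*:5556")
sub.setsockopt(zmq.SUBSCRIBE, b"")   # receive everything

# ...and each of the many publishers connects.
pub = ctx.socket(zmq.PUB)
pub.connect("tcp://localhost:5556")

# In practice, allow a moment for the connection to establish before
# the first send, or early messages are silently dropped (slow joiner).
pub.send(b"hello from one of many publishers")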

Are delegation and websockets similar behind the scenes?

I'm just trying to figure out how these two things work. Obviously WebSockets use push technology, so the client doesn't have to do long polling or constantly refresh and check whether something has changed (kind of like an event listener).
But with delegation, as in Objective-C, are delegates constantly checking, by sending requests over and over again, to see whether a method has been fired? Or is the information that a method has been fired PUSHed over to the delegates?
Or, my third theory about delegates: since they are of course in the same program, do the two classes (the protocol and the delegate class) always have an "open connection", kind of like polling? Or is it like my second paragraph, where the information is truly being PUSHed?
WebSockets are a bi-directional, full-duplex, message-based communication channel. Many push technologies can achieve low server-to-client (browser) latency, but with WebSockets you also get low client-to-server latency (and therefore low round-trip latency).
From my reading (I'm not an Objective-C expert), delegates are just a way of creating a loose protocol (in the object sense, not in the network sense) between objects. I don't know the implementation details, but I'm certain that there is no polling going on. The delegate methods are probably just looked up when needed. There is no need for an "open connection" or polling. Think of delegates as a way of doing function/method calls, not as a network transport (like WebSockets). This Apple doc goes into deeper detail.
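The pattern is easy to see outside Objective-C too. In this Python sketch, the "delegate" is just an object reference, and notifying it is an ordinary, direct method call; nothing polls and no connection is held open:

class Downloader:
    def __init__(self, delegate=None):
        self.delegate = delegate   # plain object reference, no channel

    def finish(self, data):
        # A direct, synchronous call into the delegate, exactly like
        # calling any other method; no polling, no transport involved.
        if self.delegate is not None:
            self.delegate.download_did_finish(data)

class Handler:
    def download_did_finish(self, data):
        print("delegate notified with", data)

Downloader(Handler()).finish(b"payload")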
