I have been toying with the idea of using gRPC for 'IoT'-type devices; not very constrained things like sensors, but more like single-board-computer-based devices such as robots, drones and the like. What is needed is an interface between device and centralized controller, as the devices are developed separately and possibly by other companies. So a versioned interface language is a must, and it should not live in just a Word document; it should be something programmable like a header file or WSDL or IDL or Protocol Buffers. Also, the behavior between device and controller should be specified for common use cases like registration, re-registration, etc. This can be in a Word file or some other non-technical document.
The (RPC) interface specification in Protocol Buffers (version 3), along with the efficient implementation via gRPC, is leading me to choose this over CoAP/LWM2M (Leshan Java and C++ implementations).
Having used both LWM2M and gRPC, I would say that gRPC is more developer-friendly; interface definition and implementation are fast compared to the OMA LWM2M process. Of course there is no Observe/Notify in gRPC, but MQTT should suffice for that.
Strictly speaking, one cannot compare LWM2M to gRPC. LWM2M is not just the interface; it defines behavior for a lot of IoT cases like bootstrap, registration, keep-alive, software upgrade, etc., and its universal HTTP-like GET/PUT on URL-addressable resources makes it very complete. However, most of these behaviors can be custom-defined with some effort.
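To give a feel for what "custom-defining" such behavior could look like, here is a minimal sketch of a registration/keep-alive service in gRPC (Java). The service, message and class names (DeviceRegistryGrpc, RegisterRequest, etc.) are hypothetical; they would come from a .proto file of your own design, not from any standard:

```java
// Hypothetical sketch: a Register + KeepAlive service, similar in spirit to
// LWM2M registration, defined in our own .proto and served over gRPC.
// DeviceRegistryGrpc, RegisterRequest, RegisterReply, Heartbeat and Ack are the
// classes protoc would generate from that hypothetical .proto.
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.stub.StreamObserver;

public class DeviceRegistryServer extends DeviceRegistryGrpc.DeviceRegistryImplBase {

    @Override
    public void register(RegisterRequest req, StreamObserver<RegisterReply> resp) {
        // Assign a lease, persist the device record, etc.
        RegisterReply reply = RegisterReply.newBuilder()
                .setDeviceId(req.getDeviceId())
                .setLeaseSeconds(300)   // device must re-register before the lease expires
                .build();
        resp.onNext(reply);
        resp.onCompleted();
    }

    @Override
    public void keepAlive(Heartbeat beat, StreamObserver<Ack> resp) {
        // Refresh the lease for this device.
        resp.onNext(Ack.newBuilder().setOk(true).build());
        resp.onCompleted();
    }

    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(50051)
                .addService(new DeviceRegistryServer())
                .build()
                .start();
        server.awaitTermination();
    }
}
```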
Some of the IoT things we plan to orchestrate are far from little-brained devices like bulbs and are more like robots. Has anyone used gRPC for similar purposes? Any success or failure stories to share?
The things I would miss in gRPC for IoT are the MQTT messaging capabilities such as message queueing, broker bridging and QoS parameters, or, for CoAP, out-of-band messages over SMS transport. In this context gRPC is "just" an RPC framework that assumes it is always connected over TCP.
We were considering the same solution as Joe's, which combines CoAP + Protobuf:
Protobuf and its IDL are a really helpful way to define and govern our APIs in a centralized way. Google's api-linter as well as the AIPs are high-quality resources for keeping everything sane.
CoAP for transport: a lightweight protocol for IoT devices. It's available on practically every RTOS/embedded platform and it's adapted to the low bandwidth we usually face in this world. Being the protocol chosen by Thread, and the support Project Connected Home over IP is giving it, also helps.
Use Protobuf for the messages, whether they are request/response or event messages pushed by the IoT device. This is a problem already solved by nanopb, a protobuf wire-compatible C code generator for embedded systems.
Generate our own stubs from the IDL to create our own wrapper over the CoAP and nanopb code. Unary calls are supported, and so is server streaming, by mapping it onto the CoAP observe mechanism.
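To give a feel for the unary-call wrapper idea on the controller side, here is a rough sketch using Eclipse Californium. The TelemetryRequest/TelemetryReply classes and the URI are made up for illustration; on the device side nanopb would do the encoding:

```java
// Sketch of a protobuf-over-CoAP unary call from the controller side using
// Eclipse Californium. TelemetryRequest/TelemetryReply stand in for classes
// generated from a hypothetical .proto; the URI is likewise made up.
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;
import org.eclipse.californium.core.coap.MediaTypeRegistry;

public class CoapUnaryCall {
    public static void main(String[] args) throws Exception {
        CoapClient client = new CoapClient("coap://device.local:5683/rpc/GetTelemetry");

        // Serialize the protobuf request and POST it as an opaque payload.
        TelemetryRequest request = TelemetryRequest.newBuilder()
                .setSensor("imu")
                .build();
        CoapResponse response = client.post(
                request.toByteArray(),
                MediaTypeRegistry.APPLICATION_OCTET_STREAM);

        if (response != null && response.isSuccess()) {
            // Decode the protobuf reply from the raw CoAP payload.
            TelemetryReply reply = TelemetryReply.parseFrom(response.getPayload());
            System.out.println("Got reply: " + reply);
        }
        client.shutdown();
    }
}
```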
I have taken the plunge and used it in a project with connected 'devices'; these are small computers like the Raspberry Pi. Overall it has been a good experience. The languages used are mainly C++ and Java, plus JavaScript in Node.js. We have used this as Dockerized microservices. Load balancing is what we have not done yet; I have read that HTTP/2-based load balancing is tricky, and we are planning to use Kubernetes for that, so I will update that part. Overall, container technology with versioned interfaces via gRPC seems like a good fit for (micro)services.
I use an ESP32 and a Raspberry Pi with CoAP and protobufs. As far as I know, gRPC is not supported on the ESP32/ESP8266. I'm pretty happy with it, but I didn't do any concrete testing against LWM2M. The implementation is here
I currently have a primitive RPC setup relying on JSON transferred over secured sockets, but I would like to switch to gRPC. Unfortunately, I also need access to AF_UNIX on Windows (which Microsoft recently started supporting, but gRPC has not implemented).
Since I have an existing working connection (managed with a different library), my preference would be to just use that in conjunction with gRPC to send/receive commands in place of my JSON parsing, but I am struggling to identify the best way to do that.
I have seen Plugging custom transport into gRPC, but this question differs in the following ways (as well as in my hope for a more recent answer):
I want to avoid making changes to the core of gRPC. I'd prefer to extend it, if possible, from within my library, but the answer there implies adding a new transport to gRPC. If I did need to do this at the transport level, is there a mechanism to register it with gRPC after the core has been built?
I am unsure if I need to define this as a full custom transport, since I do already have an existing connection established and ready. I have seen some things that imply I could simply extend Channel, but I might be wrong.
I need to be able to support Windows, or at least modern versions of it (which means the from_fd options gRPC provides are not available, since they are currently only implemented for POSIX systems).
Has anyone solved similar problems with gRPC?
I may have figured out my own answer. I seem to have been overly focused on gRPC, when the service definition component of Protobuf is not dependent on that.
How can I write my own RPC implementation for Protocol Buffers utilizing ZeroMQ is very similar to my use case, and https://developers.google.com/protocol-buffers/docs/proto#services seems to resolve my issue (this also explains why I seem to have been mixing up the different kinds of "channels" involved).
I welcome any improvements/suggestions, and hope that maybe this can be found in future searches by people that had the same confusion.
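For anyone landing here later, this is roughly the pattern I mean: a minimal sketch of carrying protobuf messages over a transport you already manage, with no gRPC involved. ZeroMQ (JeroMQ) REQ/REP stands in for "whatever connection you already have", and EchoRequest/EchoReply are hypothetical protobuf-generated classes:

```java
// Sketch: protobuf messages over a transport you already manage, without gRPC.
// ZeroMQ (JeroMQ) REQ/REP stands in for any existing connection; EchoRequest
// and EchoReply are classes generated from a hypothetical .proto.
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class ProtoOverZmqClient {
    public static void main(String[] args) throws Exception {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket socket = context.createSocket(SocketType.REQ);
            socket.connect("tcp://localhost:5555");

            // Frame 1: method name, frame 2: serialized protobuf request.
            socket.sendMore("EchoService.Echo");
            socket.send(EchoRequest.newBuilder().setText("ping").build().toByteArray());

            // The reply is just the serialized protobuf response.
            EchoReply reply = EchoReply.parseFrom(socket.recv());
            System.out.println(reply.getText());
        }
    }
}
```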
I'm working on a project that has multiple microservices behind an API Gateway, and some of them expose WebSocket APIs.
The WebApp needs to be able to interact with those APIs.
Those WebSocket APIs can be built with frameworks that have their own protocols, using Socket.IO or not, etc.
The main goal of this reflection is to be able to scale and keep flexibility in my WebSocket API implementations.
I thought about two solutions:
The first one is to simply proxy requests through the gateway; the web app would have to open a WebSocket for each microservice, which looks like a design flaw to me.
The other one is to create a "Notifier Service" that would be a WebSocket server, keep outgoing connections to users, and bridge incoming and outgoing messages based on a custom protocol. The drawback is that I would need to implement a pub/sub system (or find an existing solution for one). I haven't dug into it much, but it looks like a lot of work and a homemade solution, and I'm not a fan of that.
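For illustration only, here is a very rough sketch of what I imagine such a notifier could look like, using the org.java_websocket library and an in-memory topic map; the subscribe/publish message format is made up, and a real version would plug in a proper broker:

```java
// Rough sketch of a "Notifier Service": a WebSocket server that keeps client
// connections and forwards events published by internal microservices.
// Uses the org.java_websocket library; the subscribe protocol is invented.
import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

import java.net.InetSocketAddress;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

public class NotifierService extends WebSocketServer {

    // topic -> connections interested in that topic
    private final Map<String, Set<WebSocket>> subscriptions = new ConcurrentHashMap<>();

    public NotifierService(int port) {
        super(new InetSocketAddress(port));
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) { }

    @Override
    public void onMessage(WebSocket conn, String message) {
        // Minimal custom protocol: "subscribe:<topic>"
        if (message.startsWith("subscribe:")) {
            String topic = message.substring("subscribe:".length());
            subscriptions.computeIfAbsent(topic, t -> new CopyOnWriteArraySet<>()).add(conn);
        }
    }

    @Override
    public void onClose(WebSocket conn, int code, String reason, boolean remote) {
        subscriptions.values().forEach(set -> set.remove(conn));
    }

    @Override
    public void onError(WebSocket conn, Exception ex) { }

    @Override
    public void onStart() { }

    /** Called by the internal pub/sub bridge when a microservice emits an event. */
    public void push(String topic, String payload) {
        subscriptions.getOrDefault(topic, Set.of()).forEach(c -> c.send(payload));
    }
}
```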
I didn't find articles giving feedback on running such an architecture with WebSockets in production; I was hoping to find some here.
I agree with you that a simple proxy solution seems inadequate, as it exposes the internal interfaces to the clients. I found two articles which I think address your problem:
The API Gateway Pattern
Pattern: API Gateway / Backends for Frontends
Both talk about the issues you have already mentioned in your question: protocol translation is an advantage of this architecture, as it decouples the external API from the protocols used internally. On the other hand, the increased complexity of having another component to maintain is mentioned as a drawback.
The second article also suggests some existing libraries (namely Netty, Spring Reactor, and Node.js) that can be used to implement an API gateway, so you might want to spend some time evaluating these for your project.
I'm working on an IoT project where I receive a data stream over the CoAP protocol.
I want to process the data in Heron by doing some transformations on top of it.
Is it possible to integrate CoAP protocol objects into Heron?
I think it is not very important, from the CoAP endpoint's point of view, where you put the received data.
Use this link as an initial point:
http://coap.technology/impls.html
There you can find brief descriptions for implementations for several languages/platforms.
Unfortunately, I am not familiar with Twitter Heron and don't know which language is best for implementing a Heron data provider.
If that language is Java, or Heron is language-agnostic (say, has a REST API as its primary interface), I'd consider https://eclipse.org/californium/ a very mature implementation. That way (admittedly a 5000-foot view, as I don't know the details) you could write an app that uses Californium, and its CoapHandlers could push data to Heron.
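For instance, a minimal sketch of that idea, purely illustrative: the resource URI is made up, and forwardToHeron() stands in for whatever ingestion mechanism your Heron topology exposes:

```java
// Sketch: observe a CoAP resource with Californium and hand each notification
// to some Heron ingestion hook. The URI and forwardToHeron() are placeholders.
import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapObserveRelation;
import org.eclipse.californium.core.CoapResponse;

public class CoapToHeronBridge {
    public static void main(String[] args) {
        CoapClient client = new CoapClient("coap://sensor.local:5683/datastream");

        CoapObserveRelation relation = client.observe(new CoapHandler() {
            @Override
            public void onLoad(CoapResponse response) {
                // Each CoAP notification becomes an input for the Heron topology.
                forwardToHeron(response.getPayload());
            }

            @Override
            public void onError() {
                System.err.println("Observe failed; consider re-registering");
            }
        });
    }

    private static void forwardToHeron(byte[] payload) {
        // Placeholder: push into a queue that a Heron spout reads from,
        // or call whatever ingestion API your topology provides.
    }
}
```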
Socket.IO seems to be the most popular and active WebSocket emulation library. Juggernaut uses it to create a complete pub/sub system.
Faye is also popular and active, and has its own JavaScript library, making its complete functionality comparable to Juggernaut. Juggernaut uses Node for its server, and Faye can use either Node or Rack. Juggernaut uses Redis for persistence (correction: it uses Redis for pub/sub), and Faye only keeps state in memory.
Is everything above accurate?
Faye says it implements Bayeux; I think Juggernaut does not do this. Is that because Juggernaut is lower level (i.e., I could implement Bayeux using Juggernaut)?
Could Faye switch to using the Socket.IO browser JavaScript library if it wanted to? Or do their JavaScript libraries do fundamentally different things?
Are there any other architectural/design/philosophy differences between the projects?
Disclosure: I am the author of Faye.
Regarding Faye, everything you've said is true.
Faye implements most of Bayeux; the only thing missing right now is service channels, which I've yet to be convinced of the usefulness of. In particular, Faye is designed to be compatible with the CometD reference implementation of Bayeux, which has a large bearing on the following.
Conceptually, yes: Faye could use Socket.IO. In practice, there are some barriers to this:
I've no idea what kind of server-side support Socket.IO requires, and the requirement that the Faye client (there are server-side clients in Node and Ruby, remember) be able to talk to any Bayeux server (and the Faye server to any Bayeux client) may be a deal-breaker.
Bayeux has specific requirements that servers and clients support certain transport types, and says how to negotiate which one to use. It also specifies how they are used, for example how the Content-Type of an XHR request affects how its content is interpreted.
For some types of error handling I need direct access to the transport, for example resending messages when a client reconnects after a Node WebSocket dies.
Please correct me if I've got any of this wrong - this is based on a cursory scan of the Socket.IO documentation.
Faye is just pub/sub; it's simply based on a slightly more complex protocol and has a lot of niceties built in:
Server- and client-side extensions
Wildcard pattern-matching on channel routes
Automatic reconnection, e.g. when WebSockets die or the server goes offline
The client works in all browsers, on phones, and server-side on Node and Ruby
Faye probably looks a lot more complex compared to Juggernaut because Juggernaut delegates more, e.g. it delegates transport negotiation to Socket.IO and message routing to Redis. These are both fine decisions, but my decision to use Bayeux means I have to do more work myself.
As for design philosophy, Faye's overriding goal is that it should work everywhere the Web is available and should be absolutely trivial to get going with. It's really simple to get started with, but its extensibility means it can be customized in quite powerful ways; for example, you can turn it into a server-to-client push service (i.e. stop arbitrary clients pushing to it) by adding authentication extensions.
There is also work underway to make it more flexible on the server side. I'm looking at adding clustering support, and making the core pub-sub engine pluggable so you could use Faye as a stateless web frontend for another pub-sub system like Redis or AMQP.
I hope this has been helpful.
AFAIK, yes, apart from the fact that Juggernaut only uses Redis for pub/sub, not persistence. It also means client libraries in most languages have already been written (since it just needs a Redis adapter).
Juggernaut doesn't implement Bayeux, but rather has a very simple custom JSON protocol
I don't know, but probably
Juggernaut is very simple, and designed to be that way. Although I haven't used Faye, from the docs it looks like it has a lot more features than just pub/sub. Being built on top of Socket.IO has its advantages too; Juggernaut is supported in practically every browser, both desktop and mobile.
I'll be really interested in what Faye's author has to say. As I say, I haven't used it, and it would be great to know how it compares to Juggernaut. It's probably a case of using the best tool for the job. If it's pub/sub you need, Juggernaut does that very well.
Faye certainly could.
Another example of a similar project on top of Socket.IO:
https://github.com/aaronblohowiak/Push-It
EDIT: I forgot to include the prime candidate for web applications: JSON over HTTP/REST + Comet. It combines the best features of the others (below)
Persevere basically bundles everything I need in a server
The focus for Java and such is definitely on Comet servers, but it can't be too hard to use/write a client.
I'm embarking on an application with a server holding data, and clients executing operations which would affect this data, and thus require some sort of notification across all interested/subscribed clients.
The first client will probably be written in WPF, but we'll probably need to add clients written in other languages, e.g. a Java (Swing?) client, and possibly, a web client.
The actual question(s): What protocol should I use to implement this? How easy would it be to integrate with JS, Java and .NET (specifically, C#) clients?
I could use several interfaces/protocols, but it'd be easier overall to use one that is interoperable. Given that interoperability is important, I have researched a few options:
JSON-RPC
lightweight
supports notifications
The only .NET lib I could find, Jayrock, doesn't support notifications
works well with JS
also true of XML-based stuff (and possibly even binary protocols), BUT this would probably be more efficient, thanks to native support
Protobuf/Thrift
IDL makes it easy to spit out model classes in each language
doesn't seem to support notifications
Thrift comes with RPC out of the box, but protobufs don't
not sure about JS
XML-RPC
simple enough, but doesn't support notifications
SOAP: I'm not even sure about this one; I haven't grokked this yet.
seems rather complex
Message Queues/PubSub approach: Not strictly a protocol, but might be fitting
I hardly know anything about them, and got lost amongst the buzzwords -- JMS? *MQ?
Perhaps combined with some RPC mechanism above, although that might not be strictly necessary, and possibly, overkill.
Other options are, of course, welcome.
I am partial to the pub/sub design you've suggested. I'd take a look at ZeroMQ. It has bindings to C#, Java, and many other platforms.
Bindings list: http://www.zeromq.org/bindings:clr
I also found this conversation on the ZeroMQ dev mailing list that may answer some questions you have about multiple clients and ZeroMQ: http://lists.zeromq.org/pipermail/zeromq-dev/2010-February/002146.html
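As a quick taste of what the pub/sub side looks like in Java (using JeroMQ; the endpoint, topic names and payloads are made up for illustration):

```java
// Minimal ZeroMQ (JeroMQ) pub/sub sketch: the server publishes change
// notifications, clients subscribe to the topics they care about.
// Endpoint, topic and payload are illustrative only.
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class DataChangePublisher {
    public static void main(String[] args) throws Exception {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket pub = context.createSocket(SocketType.PUB);
            pub.bind("tcp://*:5556");

            // Whenever a client operation changes server data, broadcast it.
            while (!Thread.currentThread().isInterrupted()) {
                pub.sendMore("orders.updated");          // topic frame
                pub.send("{\"orderId\": 42}");           // payload frame (JSON here)
                Thread.sleep(1000);
            }
        }
    }
}

// A subscriber; WPF/.NET and JS clients would use their own ZeroMQ bindings.
class DataChangeSubscriber {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket sub = context.createSocket(SocketType.SUB);
            sub.connect("tcp://localhost:5556");
            sub.subscribe("orders.".getBytes(ZMQ.CHARSET));  // prefix subscription

            while (!Thread.currentThread().isInterrupted()) {
                String topic = sub.recvStr();
                String payload = sub.recvStr();
                System.out.println(topic + " -> " + payload);
            }
        }
    }
}
```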
As XMPP was mentioned: SIP has similar functionality. This might be more accessible for you.
We use Servoy for this. It does automatic data broadcasting to web clients and Java clients. I'm not sure if broadcasts can be sent to other platforms; you might be able to find an answer to that on their forum.
If you want to easily publish events to clients across networks, you may wish to look at the XMPP standard. (Used by, amongst other things, Jabber and Google Talk.)
See the extension for publish-subscribe functionality.
There are a number of libraries in different languages including C#, Java and Javascript.
You can use SOAP over HTTP to modify the data on the server and SOAP over SMTP to notify the subscribed clients.
OR
The server doesn't know anything about the subscriptions, and the clients poll the server on a timer to track the updates they are interested in, using XML-RPC, SOAP (generated using WSDL), or simply HTTP GET if there is no need to pass back complex data.
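A bare-bones sketch of that polling approach (Java 11+ HttpClient; the URL and poll interval are arbitrary examples):

```java
// Bare-bones polling client: ask the server for updates on a fixed interval
// over plain HTTP GET. The URL and poll interval are arbitrary examples.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdatePoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/api/updates?since=last-seen"))
                .GET()
                .build();

        while (true) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200 && !response.body().isEmpty()) {
                System.out.println("Changes: " + response.body());
            }
            Thread.sleep(5_000);  // poll interval
        }
    }
}
```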