Difference between MQTT over WebSocket and SSE (Server-Sent Events) - websocket

When publishing/subscribing to messages directly between a web server and a web browser (or vice versa), we can use MQTT over WebSockets. At the same time, SSE (half duplex) can be used to push data from a web server to a web browser. What are the other major differences, especially related to security and consistency of the application?

WebSocket is a low-level (framing) transport standardized by the IETF and a JavaScript API standardized by the W3C. It is not publish/subscribe. You can have publish/subscribe protocols that sit "on top" of WebSocket. For example, AMQP is a pub/sub protocol that can be implemented over WebSocket. Another example is Java Message Service (JMS); while JMS is an API and not a wire protocol, it can be implemented over a pub/sub protocol that, in turn, is implemented with WS. I mention both AMQP and JMS because both the AMQP protocol and the JMS API provide for "acknowledgements", which give you a higher degree of reliability than fire-and-forget mechanisms.
MQTT is a publish/subscribe protocol that can be implemented over a low-level transport; it can run over TCP/IP or WebSocket, for example. MQTT has QoS levels, which also give you acknowledgements (i.e., for reliability). MQTT is not natively understood by browsers, so MQTT traffic has to be carried over something web-friendly before it reaches the browser... usually WebSocket, since WS is a 'fat pipe' similar to TCP.
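For example, a browser page typically uses a client library such as MQTT.js to speak MQTT over a (secure) WebSocket. A minimal sketch; the broker URL and topic are placeholders, and the broker is assumed to expose an MQTT-over-WebSocket listener:

```typescript
// Minimal sketch: MQTT over WebSocket from a browser, using the MQTT.js library.
// The broker URL and topic below are illustrative placeholders.
import mqtt from "mqtt";

// "wss://" tells MQTT.js to tunnel the MQTT protocol over a secure WebSocket.
const client = mqtt.connect("wss://broker.example.com:8084/mqtt");

client.on("connect", () => {
  // Subscribe with QoS 1 (at-least-once delivery).
  client.subscribe("sensors/+/temperature", { qos: 1 });
});

client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});
```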
Server-Sent Events (SSE) is the HTML5 formalization of "Comet" (or "reverse AJAX") techniques. "Comet" was a loose collection of informal techniques; different implementations did not work together. SSE is not publish/subscribe. It is an HTTP mechanism to broadcast data from a server to the browser client(s). Essentially it's a fire-and-forget technique.
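On the browser side, consuming SSE needs nothing beyond the standard EventSource API. A minimal sketch; the /events endpoint name is a placeholder:

```typescript
// Minimal sketch: consuming Server-Sent Events in the browser.
// "/events" is a placeholder endpoint that must respond with Content-Type: text/event-stream.
const source = new EventSource("/events");

// Fired for events without an explicit "event:" field.
source.onmessage = (e: MessageEvent) => {
  console.log("data:", e.data);
};

// Fired for events sent with "event: price".
source.addEventListener("price", (e) => {
  console.log("price update:", (e as MessageEvent).data);
});

source.onerror = () => {
  // The browser retries automatically; this fires on connection problems.
  console.warn("SSE connection error");
};
```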
Most modern browsers understand SSE and WS (IE/Edge did not support SSE at the time this was written); they usually all understand Secure WebSocket (WSS) too. Practically all web servers and app servers understand SSE and WS/WSS. If you use WSS, your data will be encrypted in transit. The particular encryption cipher is negotiated on the connection; you'll have to investigate which ciphers your browser clients and web/app servers understand.

MQTT offers 3 different QoS levels that control delivery of messages:
QoS 0 - At most once (best effort)
QoS 1 - At least once
QoS 2 - Exactly once
MQTT supports user authentication and topic-level ACLs, so you can ensure users only see what they need to see, even when using wildcard subscriptions.
MQTT also allows a direct connection to the backend systems without the need for bridging in the web app.
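To make the authentication and QoS points concrete, a hedged MQTT.js sketch; the broker URL, credentials and topic are placeholders, and the broker is assumed to enforce its own topic ACLs for this user:

```typescript
import mqtt from "mqtt";

// Placeholders: the broker checks these credentials against its own ACLs.
const client = mqtt.connect("wss://broker.example.com:8084/mqtt", {
  username: "dashboard-user",
  password: "secret",
});

client.on("connect", () => {
  // Wildcard subscription; the broker's ACL decides which matching topics
  // this user is actually allowed to receive.
  client.subscribe("orders/#", { qos: 2 }); // QoS 2: exactly once
});

client.on("message", (topic, payload) => {
  console.log(topic, payload.toString());
});
```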

Related

How to use Strimzi Kafka Bridge as a streaming service

Using CNCF's Strimzi Kafka Bridge I have created a small API that can interact with a Kafka server using the HTTP/1.1 protocol. This is all good for a request-response scenario. However, my requirement is to stream events received on the Kafka topic to the subscribed client(s) (through the Strimzi bridge) as soon as I receive them, preferably on a long-lived HTTP connection (as per my understanding). It's a waste of client resources to continuously poll the bridge for messages and come back empty-handed. I would like the Kafka server to stream these events to the client directly.
I am a little unsure about SSE or Websockets or long polling. I did quite a bit of reading on these methodologies to stream data to the client. However, I am unable to figure out if these changes are at the communication or the application layer or both.
Do you just build an API (irrespective of the technology) using the traditional HTTP protocol and somehow upgrade it to use WebSockets, or should WebSocket support be built into your application libraries from the ground up?
I can provide more information if needed. The Strimzi Kafka bridge website does not mention anything about "server side streaming" OR maybe I am misunderstanding the real purpose of the tool.
The Strimzi Kafka HTTP bridge is meant as a "translator" between HTTP and the Kafka native protocol and vice versa. It means that the HTTP client has to behave like a native Kafka client; in the case of a consumer, that means polling to get messages, which is how Kafka works natively. IMHO, HTTP/1.1 is not for streaming at all.
WebSocket is a completely different protocol, to which you can of course upgrade starting from an HTTP connection, but it's not supported by the Strimzi bridge.
Actually, the AMQP 1.0 protocol support in the bridge (as a POC) can handle this kind of scenario: establishing a connection and having the bridge push on that connection instead of the client polling.
@Nick, thinking about it more, you can actually do "long polling". The GET on the /records endpoint for getting messages has a timeout parameter on the query string. Its value is used as the timeout for the internal native Kafka poll in the bridge. This gives you long-polling behaviour, because the poll doesn't return until records are available or the timeout expires. If you set a high timeout, you can get the behaviour you want while avoiding polling many times and opening/closing extra HTTP connections (see the sketch below).
More details on the timeout parameter here:
https://strimzi.io/docs/bridge/latest/#_poll
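A hedged sketch of that long-polling loop against the bridge's REST API; it assumes a consumer instance has already been created and subscribed through the bridge, and the exact path, port and content type should be verified against the documentation linked above:

```typescript
// Hedged sketch: long polling the Strimzi HTTP bridge for records.
// Assumes a consumer "my-consumer" in group "my-group" was already created and
// subscribed via the bridge API; base URL, path and headers are assumptions to verify.
const base = "http://bridge.example.com:8080";
const recordsUrl =
  `${base}/consumers/my-group/instances/my-consumer/records?timeout=30000`;

async function pollLoop(): Promise<void> {
  while (true) {
    // The bridge holds this GET open until records arrive or the 30 s timeout
    // expires, so the client is not busy-polling with many short HTTP requests.
    const res = await fetch(recordsUrl, {
      headers: { Accept: "application/vnd.kafka.json.v2+json" },
    });
    if (res.ok) {
      const records = await res.json();
      for (const r of records) {
        console.log(r.topic, r.value);
      }
    }
  }
}

pollLoop().catch(console.error);
```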

Websockets vs Reactive sockets

I have recently come across the term 'Reactive sockets'. Up until this point, I used to think WebSockets are the way to go for a fully asynchronous style.
So what are reactive sockets?
This link (http://rsocket.io/) even talks about a comparison with WebSockets.
What is RSocket?
RSocket implements the Reactive Streams specification over the network boundary. It is an application-level communication protocol with framing, session resumption, and backpressure built-in that works over the network.
RSocket is transport agnostic. RSocket can be run over Websockets, TCP, HTTP/2, and Aeron.
How does RSocket differ from Websockets?
WebSockets do not provide application-level backpressure, only TCP-based byte-level backpressure. WebSockets also only provide framing; they do not provide application semantics. It is up to the developer to build out an application protocol for interacting with the WebSocket.
RSocket provides framing, application semantics, application-level backpressure, and it is not tied to a specific transport.
For more information on the motivations behind the creation of RSocket, check out the motivations doc on the RSocket site.
Both WebSocket and RSocket are protocols that feature bidirectional, full-duplex communication (RSocket additionally multiplexes multiple streams over one connection). But they work at different levels.
WebSocket is a low-level communication protocol layered over TCP. It defines how a stream of bytes is transformed into frames. But a WebSocket message itself does not carry instructions about how to route or process it. Therefore, we need messaging protocols that operate on top of WebSocket, at the application level, to achieve two-way communication.
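To illustrate why, here is a sketch of the kind of home-grown envelope an application ends up inventing over a raw WebSocket; the { destination, body } shape is made up for this example:

```typescript
// Illustrative only: a home-grown "mini messaging protocol" over a raw WebSocket.
// The envelope shape { destination, body } is invented for this example.
type Envelope = { destination: string; body: unknown };

const ws = new WebSocket("wss://example.com/socket");
const handlers = new Map<string, (body: unknown) => void>();

function subscribe(destination: string, handler: (body: unknown) => void): void {
  handlers.set(destination, handler);
}

function send(destination: string, body: unknown): void {
  const envelope: Envelope = { destination, body };
  ws.send(JSON.stringify(envelope));
}

ws.onmessage = (event) => {
  // The application, not WebSocket, decides what a message means and where it goes.
  const msg = JSON.parse(event.data) as Envelope;
  handlers.get(msg.destination)?.(msg.body);
};

subscribe("chat.room.42", (body) => console.log("chat:", body));
```

This routing/dispatch boilerplate is exactly what protocols such as STOMP, MQTT or RSocket standardize.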
RSocket is a fully reactive application level protocol which runs over byte stream transports such as TCP, WebSocket, UDP or other. It provides application flow control over the network to prevent outages and increase resiliency. RSocket employs the idea of asynchronous stream processing with non-blocking back-pressure, in which a failing component will, rather than simply dropping traffic, communicate its stress to upstream components, getting them to reduce the load.
Websocket
TLDR: L4 protocol, TCP for web.
WebSocket is a single-bytestream, frame-based protocol with a very compact header.
It relies on web/HTTP in every important protocol aspect: an HTTP-based handshake (a full round trip, not ideal for latency), text frames in addition to binary ones, compression support (for low-throughput/high-latency connections), and mandatory frame content masking for compatibility with legacy non-TLS HTTP proxies.
Frames may be fragmented for better memory utilization by client and server.
Flow control is byte-level only, from TCP, and is not propagated to userspace.
Because WebSocket is just a single-bytestream transport, it needs a full application protocol on top to be useful, and an application-level flow control scheme to be scalable (see the sketch below).
Adoption-wise, stable WebSocket implementations are available for most OSes/architectures; the protocol is supported by all browsers and is the go-to solution if traffic needs to survive an internet hop.
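As a rough illustration of what "application-level flow control" means here, a sketch of credit-based (request-n) flow control over a raw WebSocket; the message shapes are invented for illustration and this is only the idea, not RSocket's actual wire format:

```typescript
// Conceptual sketch of credit-based (request-n) flow control over a raw WebSocket.
// Message shapes are invented; RSocket's real frames are binary and differ.
const ws = new WebSocket("wss://example.com/stream");
const pending: string[] = [];

ws.onopen = () => {
  // Grant the server credit for 16 messages up front.
  ws.send(JSON.stringify({ type: "request", n: 16 }));
};

ws.onmessage = (event) => {
  pending.push(event.data as string);
  processSome();
};

function processSome(): void {
  // Process what we can, then grant more credit only for what we consumed,
  // so a slow consumer naturally slows the producer down.
  const consumed = pending.splice(0, pending.length);
  consumed.forEach((m) => console.log("handled:", m));
  if (consumed.length > 0) {
    ws.send(JSON.stringify({ type: "request", n: consumed.length }));
  }
}
```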
RSocket. Theory
TLDR: L5 protocol, primarily for cloud/datacenter communications, with excellent throughput/latency characteristics: huge throughput while keeping latency below a few milliseconds.
RSocket is a session-layer protocol that offers multiplexed, flow-controlled streams of binary messages over any transport capable of transferring bytes in order (TCP, Unix sockets, also WebSocket).
Low latency is a cornerstone; the protocol has several capabilities for this:
Two levels of flow control: Reactive Streams at the individual stream level, and request leasing at the connection level. Leasing is a feature that lets the responder side control the number of active streams, using service and connection latency stats.
Instant handshake: the client may send requests immediately after the initial setup message.
Message fragmentation helps reduce server memory pressure and improves latency for large messages (if done properly; see RSocket. Practice below).
Session resumption: reduced latency on client reconnection.
Because binary streaming interactions and multiplexing are available out of the box, it is trivial to implement application RPC on top; only data serialization/deserialization is needed (mstreams-rpc using protobuf data encoding).
The protocol is semantically compatible with HTTP/2, which means it is also compatible with gRPC (given protobuf is used for message encoding).
RSocket. Practice
Only really useful on the JVM, because that's where Reactive Streams are popular and practically useful, with several stable implementations: RxJava, Project Reactor, SmallRye Mutiny.
RSocket/RSocket-java is based on Project Reactor, the reactive library used by Spring Boot.
The natural expectation would be best-in-class throughput; unfortunately, RSocket/RSocket-java did not get this right, so it performs worse than gRPC, a 10+ year older design (its predecessor Stubby was in use from ~2001) on top of HTTP/2, a chatty web protocol.
Fragmentation: no server memory or latency improvement, because RSocket/RSocket-java implemented it in a pointless way: frames are always reassembled before being passed downstream.
gRPC compatibility: absent.
Advice for 2022: better to stick with gRPC.

What is the difference between WebSocket and STOMP protocols?

What are the major differences between WebSocket and STOMP protocols?
This question is similar to asking the difference between TCP and HTTP. I shall still try to address your question; it's natural to get confused between these two terms if you are a beginner.
Short Answer
STOMP is layered on top of WebSockets. STOMP just specifies how message frames are exchanged between the client and the server when using WebSockets.
Long Answer
WebSockets
It is a specification to allow asynchronous bidirectional communication between a client and a server. While similar to TCP sockets, it is a protocol that operates as an upgraded HTTP connection, exchanging variable-length frames between the two parties, instead of a stream.
STOMP
It defines a protocol for clients and servers to communicate with messaging semantics. It does not define any implementation details, but rather addresses an easy-to-implement wire protocol for messaging integrations. It provides higher semantics on top of the WebSockets protocol and defines a handful of frame types that are mapped onto WebSockets frames. Some of these types are...
CONNECT
SUBSCRIBE
UNSUBSCRIBE
SEND (messages sent to the server)
MESSAGE (messages sent from the server)
BEGIN, COMMIT, ROLLBACK (transaction management)
WebSocket does imply a messaging architecture but does not mandate the use of any specific messaging protocol. It is a very thin layer over TCP that transforms a stream of bytes into a stream of messages (either text or binary) and not much more. It is up to applications to interpret the meaning of a message.
Unlike HTTP, which is an application-level protocol, in the WebSocket protocol there is simply not enough information in an incoming message for a framework or container to know how to route it or process it. Therefore WebSocket is arguably too low level for anything but a very trivial application. It can be done, but it will likely lead to creating a framework on top. This is comparable to how most web applications today are written using a web framework rather than the Servlet API alone.
For this reason the WebSocket RFC defines the use of sub-protocols. During the handshake, the client and server can use the header Sec-WebSocket-Protocol to agree on a sub-protocol, i.e. a higher, application-level protocol to use. The use of a sub-protocol is not required, but even if not used, applications will still need to choose a message format that both the client and server can understand. That format can be custom, framework-specific, or a standard messaging protocol.
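In the browser, the client lists the sub-protocols it is willing to speak in the WebSocket constructor, and the server's choice comes back via the Sec-WebSocket-Protocol response header. A small sketch; the URL is a placeholder, and the identifiers shown are the registered STOMP 1.1/1.2 sub-protocol names:

```typescript
// The second argument lists sub-protocols the client is willing to speak;
// the server selects one via the Sec-WebSocket-Protocol response header.
const ws = new WebSocket("wss://example.com/ws", ["v12.stomp", "v11.stomp"]);

ws.onopen = () => {
  // ws.protocol now holds whichever sub-protocol the server agreed to ("" if none).
  console.log("negotiated sub-protocol:", ws.protocol);
};
```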
STOMP is a simple messaging protocol, originally created for use in scripting languages, with frames inspired by HTTP. STOMP is widely supported and well suited for use over WebSocket and over the web.
The WebSocket API enables web applications to handle bidirectional communications, whereas STOMP is a simple text-oriented messaging protocol. A bidirectional WebSocket allows a web server to initiate a new message to a client, rather than wait for the client to request updates. The message could be in any protocol that the client and server agree on.
The STOMP protocol is commonly used over a WebSocket connection.
A good tutorial is STOMP Over WebSocket by Jeff Mesnil (2012).
STOMP can also be used without a WebSocket, e.g. over a Telnet connection or a message brokering service.
And raw WebSockets can be used without STOMP, e.g. a Spring Boot + WebSocket example without STOMP and SockJS.
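For completeness, a hedged sketch of STOMP over WebSocket from the browser, using the @stomp/stompjs client; the broker URL and destinations are placeholders (the port shown is a common RabbitMQ Web-STOMP default, so check your broker):

```typescript
import { Client } from "@stomp/stompjs";

// Placeholder broker URL and destinations; adjust to your STOMP broker.
const client = new Client({ brokerURL: "ws://localhost:15674/ws" });

client.onConnect = () => {
  // SUBSCRIBE frame: the broker pushes MESSAGE frames for this destination.
  client.subscribe("/topic/greetings", (message) => {
    console.log("received:", message.body);
  });
  // SEND frame to the broker.
  client.publish({ destination: "/topic/greetings", body: "hello" });
};

client.activate(); // opens the WebSocket and performs the STOMP CONNECT handshake
```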
Note: Others have explained well what WebSocket and STOMP are, so I'll try to add the missing bits.
The WebSocket protocol defines two types of messages (text and binary), but their content is undefined.
The WebSocket protocol also defines a mechanism for the client and server to negotiate a sub-protocol (that is, a higher-level messaging protocol such as STOMP) to use on top of WebSocket; it is this sub-protocol that defines the following things:
what kind of messages each can send,
what the format is,
the content of each message, and so on.
The use of a sub-protocol is optional but, either way, the client and the server need to agree on some protocol that defines message content.
Reference
TLDR; STOMP is a messaging protocol layered on top of WebSockets, i.e. STOMP utilizes WebSockets in the background. If you are thinking of building a notification/messaging system, use STOMP.
https://stomp.github.io/stomp-specification-1.2.html

Why is HTTP + Web Sockets not suitable as a messaging protocol?

I've read that HTTP is not suitable as a messaging protocol in several places such as here in reference to RabbitMQ.
I assume that there's a technical reason for this and that it's not a mere opinion. I've looked through the AMQP spec for example and can't see any reason why HTTP + Web Sockets can't work. In fact, something seems to be in the works for AMQP over Web Sockets. Furthermore, I've looked at the STOMP protocol which does use HTTP + Web Sockets and can't see any significant limitations (other than a small performance hit).
What technical characteristic does HTTP + Web Sockets lack that makes it unsuitable as a messaging protocol?
UPDATE:
This is what I was looking for: Crossbar.IO - a WAMP message broker. I needed a message broker that I can easily connect to from a browser and have not been satisfied with RabbitMQ (over STOMP) or HiveMQ (MQTT).
HTTP is request/response based, which makes it difficult to use in a publisher/subscriber fashion. Basically, you can either poll the source of messages for new ones, or create another local endpoint that the other end pushes messages to.
WebSocket is different. Despite starting as an HTTP request, it switches straight away to a persistent, full-duplex connection, where both ends can push data. Basically, in this case HTTP is only used as a protocol to negotiate the connection; once negotiated, WebSocket uses its own protocol to transfer data.
UPDATE: We are clear that HTTP is not a messaging protocol, since it is request/response. WebSocket, although it allows pushing data from both ends, is not a messaging protocol either. It defines a way of framing data, but there are no defined semantics or grammar for subscribing to topics or any other messaging operation. WAMP, for example, is an actual messaging protocol for WebSockets.
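For reference, a hedged sketch of what WAMP pub/sub looks like from the browser with a client library such as Autobahn|JS, talking to a WAMP router like Crossbar.io; the URL, realm and topic are placeholders:

```typescript
// Hedged sketch using the Autobahn|JS WAMP client; URL, realm and topic are placeholders.
import autobahn from "autobahn";

const connection = new autobahn.Connection({
  url: "ws://localhost:8080/ws", // WAMP router (e.g. Crossbar.io) endpoint
  realm: "realm1",
});

connection.onopen = (session) => {
  // PubSub: the router (broker role) fans published events out to subscribers.
  session.subscribe("com.example.ticker", (args) => {
    console.log("event:", args);
  });
  session.publish("com.example.ticker", ["tick"]);
};

connection.open();
```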

Does WAMP messaging have to route messages through a broker?

I've been reviewing WebSocket messaging protocols. Looking at WAMP, it has the basic features I want. But in reading the docs, it appears that it requires messages to be routed through the broker. Is this correct?
I am looking for real-time messaging. While the broker role may be useful as a means of bringing together the publishers and subscribers, I would want the broker to only negotiate the connection, then hand off sockets/IPs to the parties, allowing direct routing between the involved parties without requiring the broker to manage all the real-time messaging. Can WAMP do this?
Two WebSocket clients (e.g. browsers) cannot talk directly to each other. So there will be an intermediary involved in any case.
WAMP is for real-time messaging. To be precise, WebSocket is for soft real-time; there are no hard real-time guarantees in any TCP-based protocol running over networks.
Regarding publish & subscribe: a broker is always required, since it is exactly this part that decouples publishers and subscribers. If publishers were directly connected to subscribers (not possible with two WebSocket clients anyway), you would introduce coupling. But decoupling is a main point of doing PubSub in the first place.
What exactly is your concern with having a broker (PubSub) or a dealer (RPC) being involved? Latency?
