Can lightweight client implementation of HTTP2 ignore Stream Priority? - http2

Is Stream Priority an important HTTP/2 feature that a client implementation should be aware of and obey?
Can a client simply ignore priorities, i.e. never create dependent streams and never interpret the priority of server streams?
For comparison, the HPACK dynamic table can be disabled as easily as specifying SETTINGS_HEADER_TABLE_SIZE as 0 in the SETTINGS frame.

Yes. Priority is a handy feature of HTTP/2, but it is not mandatory.
HTTP/2 dynamic priorities are a best-effort feature, because the server may receive the priority frames when it is already too late to satisfy the browser's demands.
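To illustrate the analogous HPACK opt-out mentioned in the question, here is a minimal sketch (plain Python, following RFC 7540's 9-byte frame header layout) of a SETTINGS frame that advertises a dynamic table size of 0:

```python
import struct

SETTINGS_FRAME_TYPE = 0x4
SETTINGS_HEADER_TABLE_SIZE = 0x1  # setting identifier from RFC 7540

def settings_frame(settings):
    """Build a SETTINGS frame from a list of (identifier, value) pairs."""
    payload = b"".join(struct.pack(">HI", ident, value) for ident, value in settings)
    # Frame header: 24-bit length, 8-bit type, 8-bit flags, 32-bit stream id (0)
    header = (struct.pack(">I", len(payload))[1:]
              + bytes([SETTINGS_FRAME_TYPE, 0x0])
              + struct.pack(">I", 0))
    return header + payload

# Disable the HPACK dynamic table by advertising a table size of 0
frame = settings_frame([(SETTINGS_HEADER_TABLE_SIZE, 0)])
```

The peer is then forbidden from adding entries to the encoder's dynamic table, so a lightweight client never has to maintain one.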

Related

MQTT vs Socket.IO on Network bandwidth usage

I need a solution to upstream a lot of data every second: 200 kBytes per second via wireless (WiFi) or Ethernet.
I selected MQTT because it is useful for connections with remote locations where a small code footprint is required and/or network bandwidth is at a premium.
Is MQTT better than Socket.io in network bandwidth usage?
Or, is MQTT a good solution for real-time upload/publish?
Can MQTT be used for a charting system the same way as socket.io (WebSocket)?
Socket.io does several things at once. This answer focuses on your note about the underlying protocol, WebSockets, though of course you could use those without Socket.io.
WebSockets vs. MQTT is an apples-to-baskets comparison, as each can work without the other or together. MQTT can work alone as an alternative to HTTP. WebSockets is an additional protocol on top of HTTP, and can keep a long-running connection open, so that a stream of messages can be sent over a long period without having to set up a new connection for each request. That connection can carry MQTT, or non-MQTT data like JSON objects, and has the benefit of providing a reliable two-way link whose messages arrive in order.
MQTT also has less overhead, for different reasons: it is designed around a publish-subscribe model and optimizes for delivering data over narrow, slow, or unreliable connections. Though it omits many of the headers that accompany an HTTP message in favor of a few densely coded bytes, the real difference is in speed of delivery. It is a top option for constrained embedded devices, though those are usually sending small messages and trying to conserve data, processing, and power.
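The "few densely coded bytes" can be made concrete with a sketch (plain Python; the topic name and payload are made-up examples) of a QoS 0 MQTT 3.1.1 PUBLISH packet:

```python
def mqtt_publish(topic, payload):
    """Sketch of a QoS 0 MQTT 3.1.1 PUBLISH packet (no packet id at QoS 0)."""
    t = topic.encode("utf-8")
    variable_header = len(t).to_bytes(2, "big") + t  # 2-byte topic length + topic
    remaining = variable_header + payload
    # Remaining Length: variable-length integer, 7 bits per byte
    encoded, n = b"", len(remaining)
    while True:
        byte = n % 128
        n //= 128
        encoded += bytes([byte | 0x80 if n else byte])
        if n == 0:
            break
    return bytes([0x30]) + encoded + remaining  # 0x30 = PUBLISH, QoS 0

payload = b'{"v": 21.5}'
pkt = mqtt_publish("sensors/t1", payload)  # topic is a hypothetical example
overhead = len(pkt) - len(payload)  # 14 bytes of protocol framing here
```

Fourteen bytes of framing for a short topic, versus the hundreds of bytes of headers a typical HTTP request carries.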
So they have different strengths, and can even be combined. MQTT-via-WebSockets is a common approach for using MQTT inside a webapp, though plain MQTT is the norm in lower-end devices (which may be hard-pressed to send that much data anyway). I suggest MQTT for sending from device to server, or WebSockets-MQTT for quickly receiving device data in the browser or ensuring the order of messages that are sent at high rates. An important exception would be for streaming - there have only been isolated reports of it over MQTT, while Socket.io reports it as a top feature. The balance will depend on what systems you have on both ends and what sort of charting is involved.

What STOMP Header to Close a Subscription?

Is there a standard way for a server to indicate to a STOMP (web socket) client that it should close the connection? (i.e. 'kick' them from a room).
From what I can tell, there is:
a standard frame for a client to UNSUBSCRIBE
no frame for kicking or forcibly unsubscribing a client from a topic at the server's initiative.
Use cases include: closing a topic when all other members have unsubscribed, closing a topic when its temporary use has ended (e.g. fetching a large document).
The STOMP specification has no way to tell a client it needs to unsubscribe from a destination. In my view, if the server deems it necessary that the client unsubscribe, the server should simply disconnect the subscriber and perform the necessary server-side clean-up. STOMP supports certain "server" frames (i.e. MESSAGE, RECEIPT, and ERROR). An ERROR frame might be suitable here; such a frame could include details about why the client was disconnected.
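A sketch of what such a server-initiated ERROR frame could look like on the wire (plain Python; the header values and destination name are illustrative, not mandated by the spec):

```python
def error_frame(message, body):
    """Sketch of a STOMP 1.2 ERROR frame: command line, headers,
    blank line, body, NUL terminator."""
    headers = {
        "message": message,
        "content-type": "text/plain",
        "content-length": str(len(body.encode("utf-8"))),
    }
    head = "ERROR\n" + "".join(f"{k}:{v}\n" for k, v in headers.items())
    return head.encode("utf-8") + b"\n" + body.encode("utf-8") + b"\x00"

frame = error_frame("subscription closed",
                    "The destination /topic/doc-42 was retired by the server.\n")
```

Per the spec, the server must close the connection after sending an ERROR frame, which fits the disconnect-and-clean-up approach above.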
Also, it's worth noting that STOMP only specifies support for generic destinations without regard for delivery semantics so speaking about a STOMP "topic" isn't technically accurate. Of course, implementors are free to provide the kinds of delivery semantics they want and if those semantics fit with traditional "topic" (i.e. publish/subscribe) semantics that's certainly permissible.

Multicast Message queue

I am sending out large streams of video to many machines on an internal network.
I would like to use a message queue, but I cannot afford to unicast a copy of the video to each of the machines.
Is there any message queue that implements fan-out (sending one message to several machines) via multicast?
Since this is video, creating several unicast streams is out of the question.
Video streaming via high-level messaging technologies is probably a really bad idea in the first place. Why would you need messaging at all? What features do you need?
An IP multicast would disable most features of a messaging system, since every message would be delivered to all recipients at the same time.
Publish/subscribe is probably the closest you get to multicast on high-level MOMs (RabbitMQ, ActiveMQ, or other AMQP/JMS-compliant suites), but I doubt it would be usable for video in most cases.
ZeroMQ is a low-level messaging mechanism, closer to the wire, but without many of the high-level features of MOM software. It supports multicast (via PGM/EPGM), among other transports. Messaging systems using MQTT might be lightweight enough to transport large amounts of video as well.
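For reference, raw IP-multicast fan-out is only a few lines with plain sockets. A rough sketch pinned to the loopback interface for demonstration (the group address and port are arbitrary examples; on a real LAN each receiving machine would join the group on its own interface):

```python
import socket

GROUP, PORT = "224.1.1.1", 50700  # arbitrary example group and port

# Receiver: join the multicast group (each machine would do this)
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv.bind(("", PORT))
recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                socket.inet_aton(GROUP) + socket.inet_aton("127.0.0.1"))
recv.settimeout(2)

# Sender: a single send reaches every receiver that joined the group
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                socket.inet_aton("127.0.0.1"))
send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
send.sendto(b"video-chunk-0001", (GROUP, PORT))

chunk, _ = recv.recvfrom(2048)
```

Note this is plain UDP: no ordering, acknowledgement, or retransmission, which is exactly the kind of feature a messaging layer (or a protocol like RTSP/RTP) would have to add back.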
Not sure about JMS for this, but you might want to look at Netty's RTSP protocol implementation.
RTSP: http://www.ietf.org/rfc/rfc2326.txt
Netty: http://netty.io/4.0/api/io/netty/handler/codec/rtsp/package-summary.html

Spring Integration JMS Outbound adapter transaction control

In order to reach high-performance production of messages with JMS with transactions enabled, one needs to control the number of messages sent in each transaction; the larger the number, the higher the performance.
Is it possible to control transactions in such a way using Spring Integration?
One might suggest using an aggregator, but that defeats the purpose, because I don't want to have one message containing X smaller messages on the queue, but actually X messages on my queue.
Thanks!
I'm not aware of your setup, but I'd bump up the concurrent consumers on the source rather than try to tweak the outbound adapter. What kind of data source is pumping in this volume of data? From my experience, the upstream producer usually lags behind the publisher, unless both ends are JMS / messaging resources, as in the case of a bridge. In that case you will mostly see a significant improvement by bumping up the concurrent consumers, because you are dedicating n threads to receive messages and process them in parallel, and each thread will be running in its own "transaction environment".
It's also worthwhile to note that JMS does not specify a transport mechanism; it's up to the broker to choose the transport. If you are using ActiveMQ you can try experimenting with OpenWire vs AMQP and see if you get the desired throughput.
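As a sketch of the concurrent-consumers suggestion (the bean id, connection factory reference, queue name, and counts are all illustrative placeholders), a transacted listener container could be configured along these lines:

```xml
<!-- Sketch: n concurrent, individually transacted consumer threads.
     Bean id, connection factory, and queue name are placeholders. -->
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="inbound.queue"/>
    <property name="concurrentConsumers" value="8"/>
    <property name="sessionTransacted" value="true"/>
</bean>
```

With sessionTransacted enabled, each consumer thread receives and forwards its messages inside its own local JMS transaction, which is the "transaction environment" per thread described above.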

Messaging system design

I am looking for a way to send requests and receive call backs from another party.
The only gotcha is that we do not know how it will be designed/deployed on the receiver side.
We do have the text/JSON based messages defined and agreed upon.
Looked at RabbitMQ and others, but each requires a server that would need to be maintained.
Thanks,
RabbitMQ is pretty easy to maintain. You would use two queues, one for requests and the other for replies. Use the AMQP correlation_id header to tag requests and replies, so that when a reply message is received it can be matched with the original request.
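A minimal sketch of the correlation_id bookkeeping on the requester side (plain Python, no AMQP client; a real app would publish via a library such as pika and set the correlation_id and reply_to message properties):

```python
import uuid

class RequestTracker:
    """Sketch: match replies to outstanding requests by correlation id."""

    def __init__(self):
        self.pending = {}  # correlation_id -> original request

    def send(self, request):
        corr_id = str(uuid.uuid4())
        self.pending[corr_id] = request
        # here the request would be published to the request queue
        return corr_id

    def on_reply(self, corr_id, reply):
        request = self.pending.pop(corr_id, None)
        if request is None:
            return None  # unsolicited or duplicate reply: ignore it
        return request, reply

tracker = RequestTracker()
cid = tracker.send("get /users/42")         # hypothetical request body
matched = tracker.on_reply(cid, "200 OK")   # pairs the reply with its request
```

Because replies may arrive in any order, keying the pending map by a unique id is what lets multiple requests be in flight on the same reply queue.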
However, if a broker is not for you, then use ZeroMQ. It is a client library available for a dozen or more languages, and it enforces messaging patterns on top of sockets. This means that your app does not have to do all the low-level socket management. Instead you declare the socket as REQ/REP and ZeroMQ handles all the rest. You just send messages in any format you desire, and you get messages back.
I've used ZeroMQ to implement a memcache style application in Python using REQ/REP.
@user821692: You have to agree not only on the message format but also on the destination/transport protocol. For example, if both communicating parties have access to the same queue, physically located anywhere, then they can exchange the pre-defined messages. You may also look at sending messages over HTTP.
