Enterprise Messaging API with Web Services for High Performance?

Does combining an Enterprise Messaging solution with Web Services result in a real performance gain over simple HTTP requests over sockets?
(if implementation details will help, interested in JMS with a SOAP webservice)

As always, it depends. If you are sending XML documents over your socket using HTTP, then no: your performance will be roughly the same as with the enterprise frameworks, because web services are effectively just that, data encoded in the SOAP protocol and transmitted over HTTP over a socket.
If you are sending a more lightweight data stream over a socket, then you will probably get better performance.
Ultimately, it depends on what you're sending, how much of it there is, and how often you're sending it.
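To make that concrete, here is a minimal Node.js sketch (the envelope and field names are made up for illustration) comparing the on-the-wire size of a SOAP-encoded message with a compact JSON encoding of the same data:

```javascript
// Compare the wire size of the same logical message in two encodings.
// The SOAP envelope and field names below are illustrative, not from a real service.
const soap =
  '<?xml version="1.0"?>' +
  '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
  '<soap:Body><getQuoteResponse><symbol>ACME</symbol>' +
  '<price>101.25</price></getQuoteResponse></soap:Body></soap:Envelope>';

const json = JSON.stringify({ symbol: 'ACME', price: 101.25 });

console.log('SOAP bytes:', Buffer.byteLength(soap)); // a couple hundred bytes
console.log('JSON bytes:', Buffer.byteLength(json)); // a few dozen bytes
```

The exact numbers depend on the service, but for small messages the envelope overhead dominates; for large payloads or infrequent calls it matters much less.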

Typically one uses a messaging solution for message reliability, rather than performance. If you need guaranteed message delivery, use something like JMS.
HTTP is so lightweight, I can't imagine that any other messaging solution would have higher performance.

Related

Bidirectional client-server communication using Server-Sent Events instead of WebSockets?

It is possible to achieve two-way communication between a client and server using Server Sent Events (SSE) if the clients send messages using HTTP POST and receive messages asynchronously using SSE.
It has been mentioned here that SSE with AJAX would have higher round-trip latency and higher client-to-server bandwidth, since every HTTP request includes headers, and that WebSockets are better in this case. However, isn't it an advantage of SSE that it allows consistent compression of the stream, whereas WebSockets' permessage-deflate supports selective compression, meaning some messages might be compressed while others aren't?
Your best bet in this scenario would be to use a WebSockets server: building a WS implementation from scratch is time-consuming, and the problem has already been solved well. As you've tagged Socket.io, that's a good option to get started. It's an open-source tool, easy to use, and easy to follow from the documentation.
However, a library on its own doesn't provide some functionality that is critical when you want to stream data in a production-level application. There are concerns like scalability, interoperability (for endpoints operating on protocols other than WebSockets), fault tolerance, reliable message ordering, etc.
The real-time messaging infrastructure plus these critical production level features mentioned above are provided as a service called a 'Data Stream Network'. There are a couple of companies providing this, such as Ably, PubNub, etc.
I've worked extensively with Ably, so I'm comfortable sharing an example in Node.js that uses Ably:
var Ably = require('ably');
var realtime = new Ably.Realtime('YOUR-API-KEY');
var channel = realtime.channels.get('data-stream-a');
// subscribe on devices or database
channel.subscribe(function (message) {
  console.log('Received: ' + message.data);
});
// publish from Server A
channel.publish('example', 'message data');
You can create a free account to get an API key with 3 million free messages per month, which should be enough to try it out properly.
There's also a concept of Reactor functions, which essentially lets you invoke serverless functions in realtime on AWS, Azure, GCloud, etc. You can also place a database on one side and log data as it arrives.
Hope this helps!
Yes, it's possible.
You can have more than 1 parallel HTTP connection open, so there's nothing stopping you.

What's an acceptable latency for service/message bus

I'm developing a simple RPC style message bus where microservices will live on different virtualized machines.
I'm just testing a simple proof of concept using c4.large instances on EC2 for RabbitMQ, the server and the client.
I'm noticing round trips to the server and back are ~100ms with ~20ms for connecting to the amqp server and another ~80ms for returning a simple string.
This seems quite high to have an overhead of 100ms for each RPC request. Is there a typical acceptable latency for this style of architecture? Should I be looking at different tools?
A message bus is typically used in applications to support asynchronous processing. A very simple example of this would be sending emails in response to a state change that happened in the application.
In this regard, 100ms is quite fast.
If you're trying to keep synchronous operations in your application fast, you won't be happy making a message bus part of them.
Note that the above statement refers to external message buses. In-process message delivery mechanisms can be built with much less latency, but this is probably not what you need in the context of a microservices architecture.
Should I be looking at different tools?
No, you have appropriate tools for a microservices architecture. But you should ask yourself the following questions:
Is a microservices architecture the right choice for my application?
If yes, do I have suitable service boundaries?

Should we prefer SSE + REST over websocket when using HTTP/2?

When using WebSocket, we need a dedicated connection for bidirectional communication. If we use HTTP/2, the server can maintain a second stream back to the client over the same connection.
In that case, using WebSocket seems to introduce unnecessary overhead, because with SSE and regular HTTP requests we get the advantage of bidirectional communication over a single HTTP/2 connection.
What do you think?
Using 2 streams in one multiplexed HTTP/2 TCP connection (one stream for server-to-client communication - Server Sent Events (SSE), and one stream for client-to-server communication and normal HTTP communication) versus using 2 TCP connections (one for normal HTTP communication and one for WebSocket) is not easy to compare.
Probably the mileage will vary depending on applications.
Overhead ? Well, certainly the number of connections doubles up.
However, WebSocket can compress messages, while SSE cannot.
Flexibility? If the connections are separate, they can use different encryption configurations. HTTP/2 typically requires very strong encryption, which may limit performance.
On the other hand, WebSocket does not require TLS.
Does clear-text WebSocket work in mobile networks ? In the experience I have, it depends. Antiviruses, application firewalls, mobile operators may limit WebSocket traffic, or make it less reliable, depending on the country you operate.
API availability ? WebSocket is a wider deployed and recognized standard; for example in Java there is an official API (javax.websocket) and another is coming up (java.net.websocket).
I think SSE is a technically inferior solution for bidirectional web communication and as a technology it did not become very popular (no standard APIs, no books, etc - in comparison with WebSocket).
I would not be surprised if it gets dropped from HTML5, and I would not miss it, despite being one of the first to implement it in Jetty.
Depending on what you are interested in, you have to do your benchmarks or evaluate the technology for your particular case.
From the perspective of a web developer, the difference between Websockets and a REST interface is semantics. REST uses a request/response model where every message from the server is the response to a message from the client. WebSockets, on the other hand, allow both the server and the client to push messages at any time without any relation to a previous request.
Which technique to use depends on what makes more sense in the context of your application. Sure, you can use some tricks to simulate the behavior of one technology with the other, but it is usually preferable to use the one that fits your communication model better when used by the book.
Server-sent events are a rather new technology which isn't yet supported by all major browsers, so it is not yet an option for a serious web application.
It depends a lot on what kind of application you want to implement. WebSocket is more suitable if you really need bidirectional communication between server and client, but you will have to implement the whole communication protocol yourself, and it might not be well supported by all IT infrastructures (some firewalls, proxies or load balancers may not support WebSockets). So if you do not need a 100% bidirectional link, I would advise using SSE with REST requests for additional information from client to server.
But on the other hand, SSE comes with certain caveats: for instance, in the browser's JavaScript EventSource implementation you cannot set custom request headers. The only workaround is to pass query parameters, but then you can run into the query-string size limit.
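As a sketch of that workaround (the /stream path and token parameter are hypothetical), the credential that would normally travel in a header is moved into the query string:

```javascript
// EventSource cannot set custom request headers, so pass the auth
// token as a query parameter instead (hypothetical endpoint and token).
const params = new URLSearchParams({ token: 'YOUR-AUTH-TOKEN' });
const url = '/stream?' + params.toString();

console.log(url); // "/stream?token=YOUR-AUTH-TOKEN"

// In the browser:
// const source = new EventSource(url);
// source.onmessage = (e) => console.log(e.data);
```

Keep in mind the query-string size limit mentioned above, and that URLs (tokens included) tend to end up in server logs.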
So, again, choosing between SSE and WebSockets really depends on the kind of application you need to implement.
A few months ago, I had written a blog post that may give you some information: http://streamdata.io/blog/push-sse-vs-websockets/. Although at that time we didn't consider HTTP2, this can help know what question you need to ask yourself.

What are the advantages of websocket APIs to middleware?

Some pieces of middleware support websockets natively e.g. HiveMQ: http://www.hivemq.com/mqtt-over-websockets-with-hivemq/. What advantages are conferred to a developer using the websockets API as a first class client to the middleware, rather than routing requests through an intermediary server that supports language specific APIs e.g.
Client -> Middleware
vs
Client -> Server -> Middleware
For example, we could argue that skipping an intermediary server reduces bandwidth costs, spares the developer from writing an extra layer, and provides native SSL WebSocket support.
What other advantages might be provided to not just a developer, but any party through providing websockets support for middleware?
The main advantages you get are simplicity and, in the case of HiveMQ, scalability.
Let me explain these advantages:
Simplicity
In the case of HiveMQ, you just start the server and you are good to go. All web applications which use an MQTT library over WebSockets can connect to the server without even knowing that WebSockets is used as the transport. For HiveMQ itself, it's just another MQTT client, so it doesn't matter whether the clients are connected via WebSockets or via a classic TCP connection. I think you already mentioned the other arguments in your question. And last but not least, the operations folks will thank you if they have one less system (in your case, the "Server") to maintain.
Scalability
Software like HiveMQ is very scalable and can handle hundreds of thousands of concurrently connected clients. The chance is high that the additional layer ("Server" in your case) would introduce a bottleneck. Also, things like load balancing with a hardware or software load balancer get a lot easier if you can throw out unneeded layers. In general, the architecture of your system will get a lot simpler if you don't need these additional layers (assuming they are not services that can be reused by other applications, as microservices are).
Last but not least it's worth noting, that HiveMQ itself is often integrated with classic middleware / ESBs. That means, people write custom plugins for integrating HiveMQ to their existing middleware. JMS or webservice calls (REST, SOAP) are often used for doing that.
Take that answer with a grain of salt, since I'm involved developing HiveMQ :-)

SignalR Method Calls - Faster than Conventional AJAX Calls?

If my web application has a number of regular AJAX methods in it, but I've introduced an always-on SignalR connection, is it worth refactoring to make the regular AJAX methods be hub methods instead? Would it be faster since it's using the already-there connection?
IMHO this would be a misuse of SignalR.
Would it be faster? It really depends on several factors, the first of which is which transport ends up being used. If it's WebSockets, then yes, because a message will be sent over a connection that's guaranteed to already be established; but if it's SSE or long polling, you're still doing a plain old HTTP POST every time you send a message. The second factor is that if the server allows keep-alive connections, browsers will keep TCP connections to the server open for some period between requests anyway, so there would be no connection-establishment overhead in either case.
Also, let's not forget our powerful friend the GET verb and all the goodness it brings in terms of one of the most important features of the web: caching. If you have a lot of cacheable data, you wouldn't want to be sending real-time messages over web sockets to fetch it, because you'd basically be throwing out the entire caching infrastructure of the web. The browser can't help you any more; you'd have to build all that intelligence yourself with local storage and custom messages, which would be, for lack of a better word, insane. :) You also give up proxies caching public data entirely, which is extremely underrated in terms of how much it can help performance.
My guidance is that you leave what can be simple request/response exactly the way it is today leveraging AJAX and only use a technology like SignalR for what it's intended to be: real-time communications.
