What's an acceptable latency for service/message bus - amazon-ec2

I'm developing a simple RPC style message bus where microservices will live on different virtualized machines.
I'm just testing a simple proof of concept using c4.large instances on EC2 for RabbitMQ, the server and the client.
I'm noticing round trips to the server and back are ~100ms with ~20ms for connecting to the amqp server and another ~80ms for returning a simple string.
An overhead of ~100ms for each RPC request seems quite high. Is there a typical acceptable latency for this style of architecture? Should I be looking at different tools?

A message bus is typically used in applications to support asynchronous processing. A very simple example of this would be sending emails in response to a state change that happened in the application.
In this regard, 100ms is quite fast.
If you're trying to keep synchronous operations in your application fast, you won't be happy making a message bus part of them.
Note that the above statement refers to external message buses. In-process message delivery mechanisms can be built with much less latency, but this is probably not what you need in the context of a microservices architecture.
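For contrast, here is what "much less latency" can look like: a minimal in-process bus is just a dictionary of handlers, so delivery is an ordinary function call rather than a network round trip. This sketch is purely illustrative and not tied to any framework:

```python
# Illustrative in-process pub/sub: dispatch is a plain function call,
# so latency is microseconds rather than the milliseconds of a network bus.
from collections import defaultdict
from typing import Callable

class InProcessBus:
    def __init__(self):
        self._subs: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, payload) -> None:
        # Synchronous, in-memory delivery to every registered handler.
        for handler in self._subs[topic]:
            handler(payload)

bus = InProcessBus()
bus.subscribe("user.created", lambda p: print("send welcome email to", p))
bus.publish("user.created", "alice@example.com")
```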
Should I be looking at different tools?
No, you have appropriate tools for a microservices architecture. But you should ask yourself the following questions:
Is a microservices architecture the right choice for my application?
If yes, do I have suitable service boundaries?
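On the measured numbers themselves: roughly 20ms of the 100ms round trip is connection setup, which a long-lived client can amortize by opening the AMQP connection once and reusing it for every call. A minimal sketch of that pattern with pika (a common Python RabbitMQ client) follows; the queue name and payload are illustrative:

```python
# Sketch of an RPC client that holds one AMQP connection open across
# calls, rather than paying ~20ms of connection setup per request.
import uuid
import pika

class RpcClient:
    def __init__(self, host="localhost"):
        # Connect once; reuse the connection and channel for every call.
        self.conn = pika.BlockingConnection(pika.ConnectionParameters(host))
        self.channel = self.conn.channel()
        result = self.channel.queue_declare(queue="", exclusive=True)
        self.callback_queue = result.method.queue
        self.channel.basic_consume(
            queue=self.callback_queue,
            on_message_callback=self._on_response,
            auto_ack=True,
        )
        self.response = None
        self.corr_id = None

    def _on_response(self, ch, method, props, body):
        # Match replies to requests via the correlation id.
        if props.correlation_id == self.corr_id:
            self.response = body

    def call(self, payload: bytes, rpc_queue: str = "rpc_queue") -> bytes:
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange="",
            routing_key=rpc_queue,
            properties=pika.BasicProperties(
                reply_to=self.callback_queue,
                correlation_id=self.corr_id,
            ),
            body=payload,
        )
        while self.response is None:
            self.conn.process_data_events(time_limit=1)
        return self.response

# client = RpcClient(); client.call(b"ping")  # one connection per process
```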

Related

KDB+/Q: GRPC implementation?

gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. It is also applicable in the last mile of distributed computing to connect devices, mobile applications and browsers to backend services.
I'm finding gRPC is becoming increasingly pertinent in backend infrastructure, and I would've liked to have it in my favorite language/tsdb, kdb+/q.
I was surprised to find that kdb+ does not have a gRPC implementation. Obviously, the protobuf interface (https://code.kx.com/q/interfaces/protobuf/) doesn't support the parsing of RPCs; is there anything technically preventing a KDB+ implementation of the RPC requests/services etc. found in gRPC?
Why would one not want to implement RPCs (gRPC) in kdb+, and would it be a good idea to wrap a C/C++ implementation therein in order to achieve this functionality?
Thanks for your advice.
Interesting post:
https://zimarev.com/blog/event-sourcing/myth-busting/2020-07-09-overselling-event-sourcing/
outlines event sourcing, which I think might be a better fit for kdb?
What is the main issue with services using RPC calls to exchange information? Well, it’s the high degree of coupling introduced by RPC by its nature. The whole group of services or even the whole system can go down if only one of the services stops working. This approach diminishes the whole idea of independent components.
In my practice I hardly encounter any need to use RPC for inter-service communication. Partially because I often use Event Sourcing, more about it later. But we always use asynchronous communication and exchange information between services using events, even without Event Sourcing.
For example, an order microservice in an e-commerce system needs customer data from the customer microservice. These dependencies between microservices are not ideal: other microservices can go down, and synchronous RESTful requests over HTTPS do not scale well due to their blocking nature. If there were a way to completely eliminate dependencies between microservices, the result would be a more robust architecture with fewer bottlenecks.
You don’t need Event Sourcing to fix this issue. Event-driven systems are perfectly capable of doing that. Event Sourcing can eliminate some of the associated issues like two-phase commits, but again, not a requirement to remove the temporal coupling from your system.
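To make the temporal-decoupling argument concrete, here is a minimal sketch of the order/customer example above: the order service consumes customer events into a local read model instead of calling the customer service synchronously. Event and field names are hypothetical:

```python
# Hypothetical sketch: the order service keeps a local replica of customer
# data fed by CustomerUpdated events, removing the synchronous dependency.
customers = {}  # local read model owned by the order service

def on_customer_updated(event: dict) -> None:
    # Apply each event as it arrives from the bus.
    customers[event["customer_id"]] = event["data"]

def place_order(customer_id: str, items: list) -> dict:
    # Reads are served from the local replica, so orders keep flowing
    # even while the customer service is down.
    customer = customers.get(customer_id)
    if customer is None:
        raise LookupError("customer not yet replicated locally")
    return {"customer": customer, "items": items, "status": "placed"}
```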

Reactive approach of Marklogic database

Does MarkLogic support backpressure or allow sending data in chunks, i.e. a reactive approach?
'Reactive' is a fairly new term describing a particular incarnation of old concepts common in server and database technologies, but fairly new to modern client and middle-tier programming.
I am assuming the question is prompted by the need/desire to work within an existing 'reactive' framework (such as vert.x or Rx/Java). For that question, the answer is 'no': there is no 'official' API which integrates directly with these frameworks, to my knowledge. There are community APIs which I have not personally used; an example is https://github.com/etourdot/vertx-marklogic (a reactive, vert.x MarkLogic API).
MarkLogic is a 'reactive' design internally, in that it implements the functionality the modern 'reactive' term is used to describe, but it does not expose any standard 'reactive' APIs for this (there are very few standards in this area). Code running within MarkLogic Server (XQuery, JavaScript) implicitly benefits from this: although there is no explicit backpressure API, a side effect of single-threaded blocking IO (from the app's perspective) is that the equivalent of 'backpressure' is implemented by the implicit flow control of the IO APIs. You cannot overdrive a properly configured MarkLogic server on a single thread doing blocking IO; connections to an overloaded server will take longer and eventually time out ('backpressure' :)
Similarly, (most of) the external APIs (REST, XCC) are also blocking and single-threaded.
The server core manages rate control via a variety of methods such as actively managing the TCP connection queue size, keep alive times, numbers of active threads etc.
In general the server does a very good job at this without explicit low level programming needed, balancing the latency across all clients. If this needs improving, the administration guides have good direction on how to tune the various parameters so the system behaves well on its own.
If you want to implement a per-connection, client-aware 'reactive' API, you will need to implement it yourself. This can be done using the same techniques used for other blocking IO APIs, i.e. either use multiple threads or non-blocking IO. Some of the ML SDKs have provision for non-blocking IO or control over timeouts, which can be used to implement a 'reactive' API.
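As a rough illustration of the roll-your-own approach, the sketch below bounds the number of in-flight requests to MarkLogic's REST document endpoint with a semaphore, which is client-side backpressure in its simplest form. asyncio/aiohttp, the credentials and the endpoint details are assumptions for illustration, not an official MarkLogic SDK:

```python
# Hedged sketch: bounded-concurrency ingest against a MarkLogic REST
# endpoint. Waiting on the semaphore *is* the backpressure.
import asyncio
import aiohttp

MAX_IN_FLIGHT = 8  # cap concurrent requests so the server is never overdriven

async def put_doc(session, sem, uri, body):
    async with sem:  # acquire a slot before issuing the request
        async with session.put(
            f"http://localhost:8000/v1/documents?uri={uri}",
            data=body,
            timeout=aiohttp.ClientTimeout(total=30),
        ) as resp:
            resp.raise_for_status()

async def ingest(docs):
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    auth = aiohttp.BasicAuth("admin", "admin")  # placeholder credentials
    async with aiohttp.ClientSession(auth=auth) as session:
        await asyncio.gather(
            *(put_doc(session, sem, uri, body) for uri, body in docs)
        )

# asyncio.run(ingest([("/doc/1.json", b'{"example": true}')]))
```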
Similarly, code running in the server itself (XQuery or JavaScript) can implement 'reactive'-type behaviour by making use of the task queue, as exposed by the xdmp:spawn-xxx APIs. This is done in many libraries to manage bulk ingest. Care must be taken to control the amount of concurrency, as you can easily overload the server by spawning too many concurrent requests. Managing state is a bit tricky, as there is an interaction/opposition between the transaction model and task creation: the former generally presents an idempotent view of data that can be incongruous with the concept of 'current' with respect to asynchronous tasks.

web Api application subscribing to a queue. Is it a good idea?

We are designing a reporting system using a microservice architecture. All the services are supposed to be subscribers to the event bus, and they communicate by raising events. We also decided to expose each of our services via a REST API. Now the question is: is it a good idea to create our services as web API [RESTful] applications which are also subscribers to the event bus? So basically there are 2 points of entry to each service: the API and events. I have a feeling that we should separate these 2, as they are 2 different concerns. Any ideas?
Since microservices architecture is un-opinionated by design, you may get different answers to this question.
Yes, REST and event-based are two different things, but sometimes combining both gives a design with better flexibility.
Answering your concerns: I don't see any harm if REST APIs also subscribe to a queue, as long as you can maintain both of them, i.e. changes to messages do not have any impact on the APIs, and you have proper fallback and eventual-consistency mechanisms in place (a minimal sketch follows the comparison below). There are already a few projects which have tried this, such as nakadi and ponte.
So it all depends on your services' communication behaviour whether to choose REST APIs, an event-based design, or both.
Based on your requirements, you can choose REST APIs where you need synchronous behaviour between services, and go with an event-based design where services need asynchronous behaviour; there is no harm in combining both.
Ideally, for inter-service communication it is better to go with messaging, while for client-to-service communication REST APIs are the best fit.
See the communication-style patterns on microservices.io.
REST-based architecture
Advantages
Request/response is easy and best fitted to synchronous environments.
Simpler system, since there is no intermediate broker.
Promotes orchestration, i.e. a service can take action based on the response of another service.
Drawbacks
Services need to discover the locations of service instances.
One-to-one mapping between services.
REST uses HTTP, a general-purpose protocol built on top of TCP/IP, which adds an enormous amount of overhead when used to pass messages.
Event-driven architecture
Advantages
Event-driven architectures are appealing to API developers because they function very well in asynchronous environments.
Loose coupling: an event from one service can trigger actions in multiple services, depending on application requirements, and it is easy to plug any new consumer into a producer.
Improved availability, since the message broker buffers messages until the consumer is able to process them.
Drawbacks
Additional complexity of the message broker, which must be highly available.
Debugging an event request is not that easy.
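To make the "two entry points, one domain" idea concrete, here is the minimal sketch referred to above: the REST handler and the queue consumer both delegate to the same domain function, so the business logic stays independent of either entry point. Flask and pika are assumed; the queue and route names are illustrative:

```python
# Hedged sketch: one service process with two entry points, a REST API
# (Flask) and an event-bus subscription (pika), sharing the same domain logic.
import json
import threading
import pika
from flask import Flask, jsonify, request

app = Flask(__name__)

def record_report_line(line: dict) -> None:
    # Shared domain logic; a real service would write to a datastore here.
    print("recording", line)

@app.post("/reports")
def create_report():
    record_report_line(request.get_json())  # synchronous entry point
    return jsonify(status="accepted"), 202

def consume_events():
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue="report-events", durable=True)
    ch.basic_consume(
        queue="report-events",
        on_message_callback=lambda c, m, p, body: record_report_line(json.loads(body)),
        auto_ack=True,
    )
    ch.start_consuming()  # asynchronous entry point

if __name__ == "__main__":
    threading.Thread(target=consume_events, daemon=True).start()
    app.run(port=8080)
```

Keeping record_report_line free of transport concerns is what preserves the separation of concerns the question asks about.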

Should we prefer SSE + REST over websocket when using HTTP/2?

When using WebSocket, we need a dedicated connection for bidirectional communication. If we use HTTP/2, the server-to-client channel can be a second stream maintained over the same connection.
In that case, using WebSocket seems to introduce unnecessary overhead, because with SSE and regular HTTP requests we can get the advantage of bidirectional communication over a single HTTP/2 connection.
What do you think?
Using 2 streams in one multiplexed HTTP/2 TCP connection (one stream for server-to-client communication - Server Sent Events (SSE), and one stream for client-to-server communication and normal HTTP communication) versus using 2 TCP connections (one for normal HTTP communication and one for WebSocket) is not easy to compare.
Probably the mileage will vary depending on applications.
Overhead? Well, certainly the number of connections doubles up.
However, WebSocket can compress messages, while SSE cannot.
Flexibility? If the connections are separated, they can use different encryptions. HTTP/2 typically requires very strong encryption, which may limit performance.
On the other hand, WebSocket does not require TLS.
Does clear-text WebSocket work in mobile networks? In my experience, it depends: antiviruses, application firewalls and mobile operators may limit WebSocket traffic, or make it less reliable, depending on the country you operate in.
API availability? WebSocket is a more widely deployed and recognized standard; for example, in Java there is an official API (javax.websocket) and another is coming up (java.net.websocket).
I think SSE is a technically inferior solution for bidirectional web communication and as a technology it did not become very popular (no standard APIs, no books, etc - in comparison with WebSocket).
I would not be surprised if it gets dropped from HTML5, and I would not miss it, despite being one of the first to implement it in Jetty.
Depending on what you are interested in, you have to do your benchmarks or evaluate the technology for your particular case.
From the perspective of a web developer, the difference between Websockets and a REST interface is semantics. REST uses a request/response model where every message from the server is the response to a message from the client. WebSockets, on the other hand, allow both the server and the client to push messages at any time without any relation to a previous request.
Which technique to use depends on what makes more sense in the context of your application. Sure, you can use some tricks to simulate the behavior of one technology with the other, but it is usually preferable to use the one which fits your communication model better when used by-the-book.
Server-sent events are a rather new technology which isn't yet supported by all major browsers, so it is not yet an option for a serious web application.
It depends a lot on what kind of application you want to implement. WebSocket is more suitable if you really need bidirectional communication between server and client, but you will have to implement the whole communication protocol yourself, and it might not be well supported by all IT infrastructures (some firewalls, proxies or load balancers may not support WebSockets). So if you do not need a 100% bidirectional link, I would advise using SSE with REST requests for additional information from client to server.
But on the other hand, SSE comes with certain caveats; for instance, in the browser JavaScript implementation (EventSource) you cannot set custom headers. The only workaround is to pass query parameters, but then you can run into query-string size limits.
So, again, choosing between SSE and WebSockets really depends on the kind of application you need to implement.
A few months ago, I wrote a blog post that may give you some information: http://streamdata.io/blog/push-sse-vs-websockets/. Although at that time we didn't consider HTTP/2, it can help you work out which questions you need to ask yourself.
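For reference, the SSE + REST combination needs nothing beyond plain HTTP. Below is a minimal single-client sketch (Flask assumed; route names illustrative): the server-to-client leg is an SSE stream, the client-to-server leg is an ordinary POST:

```python
# Hedged sketch: bidirectional flow over plain HTTP. SSE carries
# server-to-client messages; a normal POST carries client-to-server ones.
import queue
from flask import Flask, Response, request

app = Flask(__name__)
events = queue.Queue()  # messages waiting to be pushed to the client

@app.post("/messages")
def client_to_server():
    events.put(request.get_data(as_text=True))  # client-to-server leg
    return "", 204

@app.get("/stream")
def server_to_client():
    def stream():
        while True:
            msg = events.get()        # blocks until a message is available
            yield f"data: {msg}\n\n"  # SSE wire format: data line + blank line
    return Response(stream(), mimetype="text/event-stream")
```

Over HTTP/2, both routes multiplex onto the same TCP connection, which is exactly the saving the question is asking about.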

Enterprise Messaging API with Web Services for High Performance?

Does combining an Enterprise Messaging solution with Web Services result in a real performance gain over simple HTTP requests over sockets?
(if implementation details will help, interested in JMS with a SOAP webservice)
As always, it depends. If you are sending XML documents over your socket using HTTP, then no: your performance will be roughly the same as with the enterprise frameworks (because web services are effectively just that: data encoded in the SOAP protocol, transmitted over HTTP over a socket).
If you are sending a more lightweight data stream over a socket, then you will probably get better performance.
Ultimately, it depends on what you're sending, how much of it there is, and how often you're sending it.
Typically one uses a messaging solution for message reliability, rather than performance. If you need guaranteed message delivery, use something like JMS.
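The guaranteed-delivery idea looks much the same in any broker. Here is a minimal sketch of it with RabbitMQ/pika rather than JMS (queue name and payload are illustrative): a durable queue, persistent messages, and publisher confirms:

```python
# Hedged sketch of reliability features, shown with RabbitMQ/pika:
# the message survives broker restarts and the publish is acknowledged.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.confirm_delivery()  # broker confirms each publish (at-least-once)
ch.queue_declare(queue="orders", durable=True)  # queue survives restarts
ch.basic_publish(
    exchange="",
    routing_key="orders",
    body=b'{"order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    mandatory=True,  # raise if the message cannot be routed to a queue
)
conn.close()
```

Each of these guarantees costs latency and throughput, which is why messaging is chosen for reliability rather than raw speed.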
HTTP is so lightweight, I can't imagine that any other messaging solution would have higher performance.

Resources