In which layer should I place websocket calls in a layered architecture? - model-view-controller

I'm starting to work with NestJS, a JavaScript framework for building RESTful APIs.
The framework encourages a multilayered architecture, separating controllers, services, and repositories.
In this project of mine, I have a RESTful API that talks to my website, and I currently need websockets for several reasons.
The general behavior of a module that will use the websocket protocol (in my app) is:
client makes http request with a message -> backend validates -> backend broadcasts the message through websockets.
Much like a chat application.
The problem is that I'm having a hard time figuring out where the best place to put these broadcast calls is. It seems like every layer is the wrong place.
I won't even consider the repository layer. Talking strictly about whether I should choose the controller or the service, they both seem wrong.
I don't want to send websocket messages in the same place where I handle HTTP requests, and I sure don't want to send them in the same place where I deal with business rules.
I've come to an approach, but I'm not sure if it's necessary. I'm using Redis as a caching mechanism and as a store for Socket.IO, so that I can horizontally scale my app and consistently send broadcast messages through sockets. Redis also has a Pub/Sub feature, and an awesome notification system called "keyspace notifications" that publishes messages to channels depending on the action performed in the cache store. Long story short: HTTP requests change resources in the backend, those changes are reflected in my Redis cache and, with a (to-be) well-crafted key design, I can listen to the modifications in the cache from another, separate module and fire the necessary broadcasts.
An illustration of the structure (diagram not reproduced here): HTTP request changes a resource -> the change is reflected in the Redis cache -> a keyspace notification fires -> a separate module listens and broadcasts through websockets.
Actually, using the correct keywords I found this article in which the author is doing something similar to what I'm proposing here.
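To make the idea concrete, here is a minimal sketch of such a separate notifier module (the chat:* key pattern, ports, and event names are my own assumptions, not part of the original setup). It subscribes to Redis keyspace notifications with the node-redis client and fires the Socket.IO broadcasts, keeping the controller and service free of websocket calls:

// notifier.ts - hypothetical standalone module; requires keyspace notifications
// to be enabled on the Redis server (e.g. CONFIG SET notify-keyspace-events KEA)
import { createClient } from "redis";
import { Server } from "socket.io";

const io = new Server(3001, { cors: { origin: "*" } });

async function startNotifier(): Promise<void> {
  const subscriber = createClient({ url: "redis://localhost:6379" });
  await subscriber.connect();

  // Keyspace notifications arrive on channels like "__keyspace@0__:<key>";
  // the message is the command that touched the key ("set", "del", ...).
  await subscriber.pSubscribe("__keyspace@0__:chat:*", (event, channel) => {
    const key = channel.replace("__keyspace@0__:", ""); // e.g. "chat:room-42"
    const roomId = key.split(":")[1];
    io.to(roomId).emit("resource-updated", { key, event });
  });
}

startNotifier().catch(console.error);

Clients would join their room (socket.join in a connection handler) on the Socket.IO side; the HTTP layer only ever writes to Redis.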

Related

Why do microservices need to communicate with each other?

I'm new to this, but if we have a frontend plus a few different microservices, I just don't get why any of them need to communicate with each other when we can work with their data via axios on the frontend. What is the purpose of an event bus and event-driven architecture when we use both a frontend and backend microservices?
Okay, in my example I'm using 5 microservices. Here are 2 of them:
Shopping cart
Posts
And I want to access the posts microservice directly and pass its data through the event bus, so the shopping cart microservice would have that information. The reason is that posts and shopping cart have different databases. Is it a good idea to do it that way, or should it go through the frontend with an axios service?
What you are suggesting could be true for a very simple application, one which hardly even needs an architecture such as microservices. It is clear why services need to communicate:
some services are not even accessible from the client (for various reasons such as security), so a change in them must be initiated by other backend services that have that privilege
some changes originate in backend services rather than the client, e.g. a cron job performing some task
it would hurt reusability, as a service should be usable not only by the client but in any environment
what would happen if you want your services to be used by the public? What if those consumers do not implement part of the needed logic, intentionally or by mistake?
making the client do everything could become very complex and would reduce flexibility
some services, such as authentication, act as a supporting mechanism to ensure safety (or anything else outside the main logic); these should be called directly by the service that needs them
As for the second part of your question, it depends on several factors like your business needs and models, desired scalability, performance, availability, etc., so the right answer, or rather the answer that fits, will differ.
For your problem, using an event bus, which is asynchronous, would not be a good solution, as it would hurt consistency across your services. Instead, a synchronous approach like a simple API call to your posts service would be a better idea, as sketched below.
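For illustration, a minimal sketch of that synchronous approach (the service URL, endpoint, and field names are assumptions, not from the question):

// cart-service.ts - the shopping-cart service calls the posts service directly
import axios from "axios";

interface Post {
  id: string;
  title: string;
  price: number;
}

// Hypothetical internal URL of the posts microservice.
const POSTS_SERVICE_URL = "http://posts-service:4000";

export async function addPostToCart(cartId: string, postId: string): Promise<void> {
  // Server-to-server call: the frontend never needs direct access to the posts service.
  const { data: post } = await axios.get<Post>(`${POSTS_SERVICE_URL}/posts/${postId}`);

  // Each service keeps its own database, so the cart stores the snapshot it needs.
  await saveCartItem(cartId, { postId: post.id, title: post.title, price: post.price });
}

// Stand-in for the cart service's own persistence layer.
async function saveCartItem(cartId: string, item: object): Promise<void> {
  console.log(`saving item to cart ${cartId}`, item);
}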

How to hide multiple websockets services behind an API Gateway?

I'm working on a project that has multiple microservices behind an API Gateway, and some of them expose WebSocket APIs.
The WebApp needs to be able to interact with those APIs.
Those WebSocket APIs can be built with frameworks that have their own protocols, using socket.io or not, etc.
The main goal of this reflection is to be able to scale and keep flexibility in my WebSocket API implementations.
I thought about two solutions:
The first one is to simply proxy requests on the gateway; the webapp would have to open a websocket for each microservice, which looks like a design flaw to me.
The other one is to create a "Notifier Service" that is a WebSocket server, keeps the outgoing connections with users, and bridges incoming and outgoing messages based on a custom protocol. The drawback is that I need to implement a pub/sub system (or find an existing solution). I didn't dig a lot into it, but it looks like a lot of work and a homemade solution; I'm not a fan of it.
I didn't find articles giving feedback on running such an architecture with websockets in production; I was hoping to find some here.
I agree with you that a simple proxy solution seems to be inadequate as it exposes the internal interfaces to the clients. I found two articles which I think are addressing your problem:
The API Gateway Pattern
Pattern: API Gateway / Backends for Frontends
Both talk about the issues that you have already mentioned in your question: Protocol translation is an advantage of this architecture as it decouples the external API from the protocols internally used. On the other hand, increased complexity due to having another component to be maintained is mentioned as a drawback.
The second article also suggests some existing libraries (namely Netty, Spring Reactor, NodeJS) that can be used to implement an API gateway, so you might want to spend some time evaluating these for your project.
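As a rough sketch of the protocol-translation idea (every name, port, and routing rule below is an assumption, not from either article): a gateway terminates one client WebSocket with the ws package and fans messages out to internal services, so the internal interfaces stay hidden:

// gateway.ts - minimal WebSocket fan-out sketch
import WebSocket, { WebSocketServer } from "ws";

// Hypothetical internal services, each with its own WebSocket endpoint.
const SERVICES: Record<string, string> = {
  chat: "ws://chat-service:4001",
  notifications: "ws://notification-service:4002",
};

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (client) => {
  // One upstream connection per service per client; a real gateway would pool these.
  const upstreams = new Map<string, WebSocket>();

  for (const [name, url] of Object.entries(SERVICES)) {
    const upstream = new WebSocket(url);
    // Tag messages coming back from a service so the client can demultiplex them.
    upstream.on("message", (data) =>
      client.send(JSON.stringify({ service: name, payload: data.toString() }))
    );
    upstreams.set(name, upstream);
  }

  // Client messages carry a "service" field that is used to route them upstream.
  client.on("message", (raw) => {
    const { service, payload } = JSON.parse(raw.toString());
    const upstream = upstreams.get(service);
    if (upstream && upstream.readyState === WebSocket.OPEN) {
      upstream.send(JSON.stringify(payload));
    }
  });

  client.on("close", () => upstreams.forEach((ws) => ws.close()));
});

A production version would add the pub/sub layer mentioned in the question (so services can push without a per-client upstream connection), which is exactly the extra complexity the two articles warn about.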

Communication pattern for MicroFrontends to MicroService backend through a single channel/websocket

We are currently facing some tough architectural questions about integrating multiple Web Components, each with its own backend service, into one smooth composite web application.
There are some constraints and (negotiable) design decisions:
A MicroService should serve its own frontend (WebComponent); we would like to use HTML Imports to allow including such a WebComponent in the composite UI
A frontend WebComponent needs to be able to receive live updates/events from its backend MicroService
The page (the sum of Web Components used in the composite UI) shall use only one connection/permanently occupied port to communicate with the backend
I made a sketch representing our abstract / non-technical requirements for further discussion:
As I understand it, the problem can be rephrased as: how do we
a) concentrate communication on entering
b) distribute communication on exiting
the single transport path on both ends.
These two tasks need to be solved on both sides of the transport path, i.e. the backend and the frontend.
For the backend, I am quite hopeful that adopting the BFF pattern, as described by Sam Newman among others, could serve our needs. The right (backend) half of the above sketch would then look similar to this:
The transport path might be best served by standardized web technologies, e.g. HTTPS and websockets (wss) for the bidirectional communication that is needed most of the time. I'm keen to learn about alternatives with an equally high adoption rate in the web technology sector.
For the frontend we are currently lacking ideas and knowledge about previously described patterns or frameworks.
The tricky thing is that multiple, basically independent WebComponents need to come together to use the ONE central communication path. If the frontend were realized as one (big) Angular application, for example, we would implement a "BackendConnectorService" (name to be discussed) and inject it into our various components.
But since we would like to use decoupled Web Components, no such background layer for shared business logic and dependency injection exists. Should we write a proprietary JS library, loaded into the window context by every one of our components (if not already present), and used (by convention) to communicate with the backend?
This would roughly integrate into the sketch as below:
Are we thinking/designing our application wrong?
I'm thankful for every reasonable idea or hint for a proven pattern/framework.
Your view on the problem and the architecture might also help us work around the issues we are facing right now.
I would go with the same approach on the frontend that you are using on the backend.
Create an HTTP or WS gateway that the frontend components send their requests to. It connects to the backend BFF, and there you have solved all your problems. Any time you want to swap your components, transport, or architecture, one side is not dependent on the other.
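If you do end up with the shared library in the window context that the question proposes, a minimal sketch could look like this (the global name, topic scheme, and message envelope are all assumptions):

// backend-connector.ts - loaded by every WebComponent; the first one to load
// creates the single WebSocket, the others reuse it via the window context.
type Handler = (payload: unknown) => void;

interface BackendConnector {
  subscribe(topic: string, handler: Handler): void;
  send(topic: string, payload: unknown): void;
}

function createConnector(url: string): BackendConnector {
  const socket = new WebSocket(url);
  const handlers = new Map<string, Handler[]>();

  socket.addEventListener("message", (event) => {
    // Messages are assumed to be envelopes of the form { topic, payload }.
    const { topic, payload } = JSON.parse(event.data);
    (handlers.get(topic) ?? []).forEach((handle) => handle(payload));
  });

  return {
    subscribe(topic, handler) {
      handlers.set(topic, [...(handlers.get(topic) ?? []), handler]);
    },
    send(topic, payload) {
      const message = JSON.stringify({ topic, payload });
      if (socket.readyState === WebSocket.OPEN) {
        socket.send(message);
      } else {
        socket.addEventListener("open", () => socket.send(message), { once: true });
      }
    },
  };
}

// By convention, every component checks the window context before creating one.
const w = window as unknown as { backendConnector?: BackendConnector };
w.backendConnector ??= createConnector("wss://example.com/bff");

// Usage inside a WebComponent:
w.backendConnector.subscribe("orders", (update) => console.log("order update", update));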

Real-time client-server interactivity using Websharper

I'm just learning Websharper but, in the short term, I have a business problem that I am trying to solve. I've written a server and WPF-based client that allows the user to vary inputs using controls like sliders and obtain feedback from the server in real-time (i.e. there is no "submit" button).
I'd like to convert this desktop GUI app into a web app using Websharper. How might I do the background request-response to the server, triggered by the user moving a slider, with the resulting feedback visualized asynchronously in the web page?
I imagine the most obvious way is to make a bunch of [<Rpc>]-decorated methods for your server-side logic and then simply invoke them on any change in the UI. IIRC, Websharper handles the client-server transition transparently, i.e. if you call a server method, the necessary proxy fires to get you the results.
As has been pointed out you may rely on RPC methods, but this might give unacceptable latency.
At IntelliFactory we are now working on a project that required asynchronous bi-directional and low-latency communication between the server and the client. We ended up using the WebSocket protocol. We are planning to document and release the code into a reusable library soon for people with similar requirements.
The prime advantage of the WebSocket protocol for our purpose was that it allowed us to maintain state on the server side of the connection. Our server is a worker role running in Windows Azure. The server is selected randomly by the Azure load balancer when the WebSocket connection is established, and the client talks to the same server while the connection is open. This allows maintaining expensive-to-initialize per-connection state on the server.
The disadvantage of the WebSocket protocol is the lack of support in older browsers. A portable low-latency alternative is SignalR, which uses some form of HTTP polling to emulate the functionality on older browsers. Unfortunately we have so far failed to adapt SignalR to our requirements on Azure. It should theoretically be possible, but since AFAIK SignalR follows a mostly stateless design, this would require coding up a router to redirect messages and "undo" the effects of the Azure load balancer.
I don't know the integration points of WebSharper, but Rx has many nice concepts and functions for functional and reactive event processing, like throttling (needed for sliders and async network calls).
https://github.com/Reactive-Extensions/RxJS
https://github.com/panesofglass/FSharp.Reactive
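To illustrate the Rx idea for the slider case, a small sketch with the modern rxjs package (the endpoint and element ids are made up; the links above point to the older RxJS 4 API, where some operator names differ):

// slider-feedback.ts - debounce slider input and cancel stale requests
import { fromEvent } from "rxjs";
import { ajax } from "rxjs/ajax";
import { debounceTime, distinctUntilChanged, map, switchMap } from "rxjs/operators";

const slider = document.getElementById("rate-slider") as HTMLInputElement;

fromEvent(slider, "input")
  .pipe(
    map(() => Number(slider.value)),
    debounceTime(200),        // wait until the user pauses for 200 ms
    distinctUntilChanged(),   // skip if the value didn't actually change
    switchMap((value) =>      // cancel any in-flight request when a new value arrives
      ajax.getJSON(`/api/feedback?value=${value}`)
    )
  )
  .subscribe((result) => {
    // Render the server's feedback; the element id is hypothetical.
    document.getElementById("feedback")!.textContent = JSON.stringify(result);
  });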

nodejs: Ajax vs Socket.IO, pros and cons

I thought about getting rid of all client-side Ajax calls (jQuery) and instead use a permanent socket connection (Socket.IO).
Therefore I would use event listeners/emitters client-side and server-side.
E.g. a click event is triggered by the user in the browser; a client-side emitter pushes the event through the socket connection to the server. A server-side listener reacts to the incoming event and pushes a "done" event back to the client. The client's listener reacts to the incoming event by fading in a DIV element.
Does that make sense at all?
Pros & cons?
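For concreteness, a minimal client-side sketch of that flow with socket.io-client (event names, element ids, and the fade mechanism are made up; the server would listen for "button-clicked" and emit "done" back):

// click -> emit -> wait for "done" -> fade in a DIV
import { io } from "socket.io-client";

const socket = io("http://localhost:3000");

document.getElementById("my-button")!.addEventListener("click", () => {
  socket.emit("button-clicked", { id: "my-button" });
});

socket.on("done", () => {
  const div = document.getElementById("result")!;
  div.style.transition = "opacity 0.5s";
  div.style.opacity = "1"; // fade the element in
});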
There is a lot of commonly repeated misinformation in this thread that is very inaccurate.
TL;DR:
WebSocket replaces HTTP for applications! It was designed by Google with the help of Microsoft and many other leading companies. All browsers support it. There are no cons.
SocketIO is built on top of the WebSocket protocol (RFC 6455). It was designed to replace AJAX entirely. It does not have scalability issues whatsoever. It works faster than AJAX while consuming an order of magnitude fewer resources.
AJAX is 10 years old and is built on top of a single JavaScript XMLHttpRequest function that was added to allow callbacks to servers without reloading the entire page.
In other words, AJAX is a document protocol (HTTP) with a single JavaScript function.
In contrast, WebSocket is an application protocol that was designed to replace HTTP entirely. When you upgrade an HTTP connection (by requesting the WebSocket protocol), you enable two-way, full-duplex communication with the server, and no further protocol handshaking is involved whatsoever. With AJAX, you either must enable keep-alive (which is the same idea as SocketIO, only an older protocol) or force new HTTP handshakes, which bog down the server, every time you make an AJAX request.
A SocketIO server running on top of Node can handle 100,000 concurrent connections in keep-alive mode using only 4 GB of RAM and a single CPU, and this limit is caused by the V8 garbage collection engine, not the protocol. You will never, ever achieve this with AJAX, even in your wildest dreams.
Why SocketIO is so much faster and consumes far fewer resources
The main reason for this is, again, that WebSocket was designed for applications, while AJAX is a workaround to enable applications on top of a document protocol.
If you dive into the HTTP protocol, and use MVC frameworks, you'll see a single AJAX request will actually transmit 700-900 bytes of protocol load just to AJAX to a URL (without any of your own payload). In striking contrast, WebSocket uses about 10 bytes, or about 70x less data to talk with the server.
Since SocketIO maintains an open connection, there's no handshake, and server response time is limited to round-trip or ping time to the server itself.
There is misinformation that a socket connection is a port connection; it is not. A socket connection is just an entry in a table. Very few resources are consumed, and a single server can provide 1,000,000+ WebSocket connections. An AWS XXL server can and does host 1,000,000+ SocketIO connections.
An AJAX connection will gzip/deflate the entire HTTP headers, decode the headers, encode the headers, and spin up an HTTP server thread to process the request, again, because this is a document protocol; the server was designed to spit out documents a single time.
In contrast, WebSocket simply stores an entry in a table for a connection, approximately 40-80 bytes. That's literally it. No polling occurs, at all.
WebSocket was designed to scale.
As for SocketIO being messy... this is not the case at all. AJAX is the messy one; you need a promise/response for every call.
With SocketIO, you simply have emitters and receivers; they don't even need to know about each other; no promise system is needed:
To request a list of users you simply send the server a message...
socket.emit("giveMeTheUsers");
When the server is ready, it will send you back another message. Tada, you're done. So, to process a list of users you simply say what to do when you get a response you're looking for...
socket.on("HereAreTheUsers", showUsers(data) );
That's it. Where is the mess? Well, there is none :) Separation of concerns? Done for you. Locking the client so they know they have to wait? They don't have to wait :) You could get a new list of users whenever... The server could even play back any UI command this way... Clients can connect to each other without even using a server with WebRTC...
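For reference, a hypothetical server-side counterpart to those two lines (loadUsers stands in for whatever data access you already have; the event names match the example above):

import { Server } from "socket.io";

const io = new Server(3000);

io.on("connection", (socket) => {
  socket.on("giveMeTheUsers", async () => {
    const users = await loadUsers();         // your own data access layer
    socket.emit("HereAreTheUsers", users);   // fires the client's socket.on handler
  });
});

// Stand-in for a real database query.
async function loadUsers(): Promise<{ id: number; name: string }[]> {
  return [{ id: 1, name: "Ada" }];
}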
Chat system in SocketIO? 10 lines of code. Real-time video conferencing? 80 lines of code. Yes... Luke... Join me. Use the right protocol for the job... If you're writing an app... use an app protocol.
I think the problem and confusion here is coming from people that are used to using AJAX and thinking they need all the extra promise protocol on the client and a REST API on the back end... Well you don't. :) It's not needed anymore :)
Yes, you read that right... a REST API is not needed anymore when you decide to switch to WebSocket. REST is actually outdated. If you write a desktop app, do you communicate with the dialog via REST? No :) That's pretty dumb.
SocketIO, utilizing WebSocket, does the same thing for you... you can start to think of the client side as simply the dialog for your app. You no longer need REST, at all.
In fact, if you try to use REST while using WebSocket, it's just as silly as using REST as the communication protocol for a desktop dialog... there is absolutely no point, at all.
What's that you say Timmy? What about other apps that want to use your app? You should give them access to REST? Timmy... WebSocket has been out for 4 years... Just have them connect to your app using WebSocket, and let them request the messages using that protocol... it will consume 50x fewer resources, be much faster, and 10x easier to develop... Why support the past when you're creating the future?
Sure, there are use cases for REST, but they are all for older and outdated systems... Most people just don't know it yet.
UPDATE:
A LOT of people have been asking me recently how they can start writing an app in 2018 (and soon 2019) using WebSockets; the barrier seems really high, and once they play with Socket.IO they don't know where else to turn or what to learn.
Fortunately the last 3 years have been very kind to WebSockets...
There are now 3 major frameworks that support BOTH REST and WebSocket, and even IoT protocols or other minimal/speedy protocols like ZeroMQ, and you don't have to worry about any of it; you just get support for it out of the box.
Note: Although Meteor is by far the most popular, I am leaving it out because although they are a very, very well-funded WebSocket framework, anyone who has coded with Meteor for a few years will tell you, it's an internal mess and a nightmare to scale. Sort of like WordPress is to PHP, it is there, it is popular, but it is not very well made. It's not well-thought out, and it will soon die. Sorry Meteor folks, but check out these 3 other projects compared to Meteor, and you will throw Meteor away the same day :)
With all of the below frameworks, you write your service once, and you get both REST and WebSocket support. What's more, it's a single line of config code to swap between almost any backend database.
Feathers: Easiest to use, works the same on the front and backend, and supports most features. Feathers is a collection of light-weight wrappers for existing tools like express. Using awesome tools like feathers-vuex, you can create immutable services that are fully mockable, support REST, WebSocket and other protocols (using Primus), and get free full CRUD operations, including search and pagination, without a single line of code (just some config). Also works really great with generated data like json-schema-faker so you can not only fully mock things, you can mock it with random yet valid data. You can wire up an app to support type-ahead search, create, delete and edit, with no code (just config). As some of you may know, proper code-through-config is the biggest barrier to self-modifying code. Feathers does it right, and will push you towards the front of the pack in the future of app design.
Moleculer: Moleculer is, unfortunately for Feathers, an order of magnitude better at the backend. While Feathers will work and lets you scale to infinity, it simply doesn't even begin to think about things like production clustering, live server consoles, fault tolerance, piping logs out of the box, or API Gateways (while I've built a production API gateway out of Feathers, Moleculer does it way, way better). Moleculer is also the fastest growing, in both popularity and new features, of any WebSocket framework.
The winning stroke with Moleculer is that you can use a Feathers or ActionHero front end with a Moleculer backend, and although you lose some generators, you gain a lot of production quality.
Because of this I recommend learning Feathers on the front and backend, and once you make your first app, try switching your backend to Moleculer. Moleculer is harder to get started with, but only because it solves all the scaling problems for you, and this information can confuse newer users.
ActionHero: Listed here as a viable alternative, but Feathers and Moleculer are better implementations. If anything about ActionHero doesn't jive with you, don't use it; there are two better options above that give you more, faster.
NOTE: API Gateways are the future, and all 3 of the above support them, but Moleculer literally gives you it out of the box. An API gateway lets you massage your client interaction, allowing caching, memoization, client-to-client messaging, blacklisting, registration, fault tolerance and all other scaling issues to be handled by a single platform component. Coupling your API Gateway with Kubernetes will let you scale to infinity with the least amount of problems possible. It is the best design method available for scalable apps.
Update for 2021:
The industry has evolved so much that you don't even need to pay attention to the protocol. GraphQL now uses WebSockets by default! Just look up how to use subscriptions, and you're done. The fastest way to handle it will occur for you.
If you use Vue, React or Angular, you're in luck, because there is a native GraphQL implementation for you! Just call your data from the server using a GraphQL subscription, and that data object will stay up to date and reactive on its own.
GraphQL will even fall back to REST for you when you need to use legacy systems, and subscriptions will still update using sockets. Everything is solved when you move to GraphQL.
Yes, if you thought "WTH?!?" when you heard you can simply subscribe, like with FireBase, to a server object, and it will update itself for you. Yes. That's now true. Just use a GraphQL subscription. It will use WebSockets.
Chat system? 1 line of code.
Real time video system? 1 line of code.
Video game with 10 MB of open-world data shared across 1M real-time users? 1 line of code. The code is just your GQL query now.
As long as you build or use the right back-end, all this realtime stuff is now done for you with GQL subscriptions. Make the switch as soon as you can and stop worrying about protocols.
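As an illustration of what "subscribe and stop worrying about the protocol" looks like in practice, a minimal sketch with the graphql-ws client (the endpoint and schema are made up):

// subscribe.ts - GraphQL subscription over WebSockets
import { createClient } from "graphql-ws";

const client = createClient({ url: "ws://localhost:4000/graphql" });

client.subscribe(
  { query: "subscription { messageAdded { id text author } }" },
  {
    next: (data) => console.log("new message", data),          // pushed over the socket
    error: (err) => console.error("subscription error", err),
    complete: () => console.log("subscription closed"),
  }
);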
Socket.IO uses a persistent connection between client and server, so you will hit a maximum number of concurrent connections depending on the resources you have on the server side, while more async Ajax requests can be served with the same resources.
Socket.IO is mainly designed for real-time, bidirectional connections between client and server, and in some applications there is no need to keep a permanent connection. On the other hand, async Ajax requests have to go through the HTTP connection setup phase and send header data and all cookies with every request.
Socket.IO has been designed as a single-process server and may have scalability issues depending on the server resources you are bound to.
Socket.IO is not well suited for applications where you are better off caching the results of client requests.
Socket.IO applications face difficulties with SEO and search engine indexing.
Socket.IO is not a standard and is not equivalent to the W3C WebSocket API. It uses the current WebSocket API if the browser supports it; socket.io was created by one person to solve cross-browser compatibility in real-time apps and is quite young, about 1 year old. Its learning curve, the smaller number of developers and community resources compared with ajax/jquery, long-term maintenance, and less need for it (or better options) in the future may be important factors for teams deciding whether or not to base their code on socket.io.
Sending one-way messages and invoking callbacks on them can get very messy.
$.get('/api', sendData, returnFunction); is cleaner than
socket.emit('sendApi', sendData); socket.on('receiveApi', returnFunction);
Which is why dnode and nowjs were built on top of socket.io to make things manageable. Still event driven but without giving up callbacks.
