CQRS: what backend to use for my View-Store?

My CQRS-based architecture currently has 4 components. It is more of a prototype, so nothing is set in stone yet.
CommandProcessor: Gets commands, executes them, etc. (duh ^^), publishes events. Is Azure-based.
ViewProcessor: Gets view requests, responds with the view. Subscribes to events to update the view store. Is Azure-based.
WebClient: AJAX-heavy web portal, sends commands and requests (JSON) views. Azure-based.
DesktopClient: Not much to say, also sends commands and requests views (undecided whether JSON or some other format). Obviously not Azure-based.
My original approach was to use an in-memory view store. Azure VMs have quite a bit of memory available, and I didn't really see the need to add the complexity of Blob Storage etc.
Additionally, I am trying to minimize command-execution latency so that I can at least partially get around the whole asynchronous-UI problem and, where needed, simulate a synchronous UI with (fast) callbacks (I hope that sentence made sense ^^).
In creating the web client, I noticed a potential flaw in my plan. The URL of the ViewProcessor is obviously different from the WebClient URL, so JSON requests would fail because of the same-origin policy. Alternatives/workarounds like JSONP did not seem that attractive because they don't solve the inherent security problem. Implementing the AJAX requests to target the WebClient itself would be an option, but then I would have redundant functionality (a view store in both the WebClient and the ViewProcessor).
I guess saving the views in Blob Storage would solve this problem, but I can't shake the feeling that I am overlooking something important/obvious. The flow would then be:
Client --command--> CommandProcessor
CommandProcessor --event--> ViewProcessor
ViewProcessor --view--> Blob
(ViewProcessor or CommandProcessor) --notification--> Client
Blob --view--> Client
That scenario would have quite a bit of latency :|

I would look again at the Blob Storage option. We store serialized view objects in blob storage, and it is very fast and stable. Is there some aspect of blob storage that concerns you?
Erick
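For illustration, persisting and fetching a serialized view is only a few lines with the current Azure SDK for JavaScript. A minimal sketch, assuming a container named 'views' and a connection string in an environment variable (both names are mine, not from the original setup):

    // Persist and fetch a serialized view with @azure/storage-blob.
    const { BlobServiceClient } = require('@azure/storage-blob');

    const service = BlobServiceClient.fromConnectionString(
      process.env.AZURE_STORAGE_CONNECTION_STRING
    );
    const container = service.getContainerClient('views'); // hypothetical container name

    async function saveView(viewName, viewObject) {
      const json = JSON.stringify(viewObject);
      // Overwrites the blob, so readers always see the latest projection.
      await container.getBlockBlobClient(viewName).upload(json, Buffer.byteLength(json));
    }

    async function loadView(viewName) {
      const buffer = await container.getBlockBlobClient(viewName).downloadToBuffer();
      return JSON.parse(buffer.toString());
    }

Each view lives under a stable blob name, so the ViewProcessor can overwrite it on every event and readers always fetch the latest copy.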

Related

Microservice requests

I'm trying to start a little microservice application, but I'm a little bit stuck on some technicalities.
I'm trying to build an issue tracker application as an example.
It has 2 database tables, issues and comments. These will also be separate microservices, for the sake of the example.
It has to be a separate API that can be consumed by multiple types of clients, e.g. mobile, web, etc.
When using a monolithic approach, the whole codebase is coupled together, and a request to the REST API, say '/issues/19', to fetch the issue with id '19' and its corresponding comments, would be handled with the following pseudocode:
on_request_issue(id)  # handler for the route '/issues/<id>'
    issue = IssuesModel.findById(id)
    issue.comments = CommentsModel.findByIssueId(id)
    return issue
But I'm not sure how I should approach this with microservices. Let's say that we have microservice-issues and microservice-comments.
I could let the client send a request to both '/issues/19' and '/comments/byissueid/19'. But that doesn't seem nice to me, since if a page needs multiple things we're sending a lot of requests for it.
I could also make a request to microservice-issues and have it in turn make a request to microservice-comments, but that looks even worse to me, since from what I've read microservices should not be coupled, and this couples them pretty hard.
So then I read about API gateways, which could/should receive a request and fan out to the other microservices, but I couldn't really figure out how to use one. Should I write code in there, for example, to catch the '/issues/19' request, fan out to both microservice-issues and microservice-comments, assemble the results, and return them?
In that case, I feel like I'm doing the work twice; won't the API gateway become a new monolith then?
Thank you for your time
API gateway sounds like what you need.
If you keep it simple, just triggering the internal APIs, it will not become your new monolith.
It will also allow you to do even better processing when your application grows with new microservices, or when you have to support different clients (browser, mobile apps, watch, IoT, etc.).
BTW, the example you show sounds like a good exercise; in reality, for most web apps, it looks like over-design. I would not break every DB call out into its own microservice.
One of the motivations for breaking something into small(er) services is service autonomy. In this case the question is: when the comments service is down, should you display the issue or not? If they are always coupled anyway, they probably shouldn't reside in two services; if they aren't, then making two calls will get you this decoupling.
That said, you may still need an API Gateway to solve CORS issues with your client.
Lastly, comments/byissueid is not a good REST interface; the issueId should be a query parameter: /comments/?issueId=...
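To make the fan-out idea concrete: a gateway route can call both services in parallel and assemble one response. A minimal sketch, assuming Node/Express and the global fetch of Node 18+; the internal service URLs and port are hypothetical:

    const express = require('express');
    const app = express();

    // Hypothetical internal addresses of the two services.
    const ISSUES_URL = 'http://microservice-issues';
    const COMMENTS_URL = 'http://microservice-comments';

    app.get('/issues/:id', async (req, res) => {
      try {
        // Fan out to both services in parallel.
        const [issueRes, commentsRes] = await Promise.all([
          fetch(`${ISSUES_URL}/issues/${req.params.id}`),
          fetch(`${COMMENTS_URL}/comments/?issueId=${req.params.id}`),
        ]);
        // Assemble a single response for the client.
        const issue = await issueRes.json();
        issue.comments = await commentsRes.json();
        res.json(issue);
      } catch (err) {
        res.status(502).json({ error: 'upstream service unavailable' });
      }
    });

    app.listen(8080);

Note the gateway holds no business rules; it only routes and assembles, which is what keeps it from growing into a new monolith.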

ViewModel Class Design - Should it be on Server Side or Client Side?

In my application, I have a View and it needs data from multiple server side data models.
We have two options.
Call a single WebService once, get a ViewModel class object from the server side, and bind it to the View.
Call multiple WebServices, get different server-side model classes, create a new ViewModel class on the client side, and bind it to the view.
What's the best approach out of these two options? Please advise.
If in doubt, consider your UX.
One of the most frustrating things from a user's perspective is to be waiting for your app to respond after they've pressed something.
Every time your app issues a request to your server, users will experience a delay - each additional request increases the length of that delay. Most typical users have a very low tolerance for that sort of thing before they get irritated and decry your app as being "slow".
In the interest of minimising the time required for content to load, keep the number of calls between your client and your server API to an absolute minimum; in general, the fewer calls the better. This leans heavily toward the 'single request, single ViewModel' approach.
Also be mindful of the size of your ViewModel payload; don't just return a huge data-dump to your user when the majority of it will never be seen or used - not only does this waste bandwidth and make things slower, but it also implies the client is going to be doing additional unnecessary work.
This has benefits on your server too; with your server needing to fulfil fewer requests, you will have less work to do later on when scaling up your app to cope with more users.
Lastly, consider the difference between a simple, lightweight "dumb" client which is only responsible for presentation and user interactivity, and a heavyweight client application.
By making your Server responsible for generating a ViewModel and doing all the hard work, you can avoid business logic on your client; therefore maintaining clean separation between your Business and Application Layers.
On the other hand, if you require multiple server API calls, then it's likely you'll need a lot more complexity on your client to build your View Model, which risks blurring the line between your Application and your Business Layer.
If you eventually build multiple different client Applications which call the same server, you may find yourself needing to re-use that business logic between those applications; it's easier to re-use business logic which already exists on the server - especially if your client applications use different technologies (e.g. a Web Client and a Mobile App).
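To illustrate option 1, the server can compose the ViewModel from several models and return only the fields the view needs. A hedged sketch (Node/Express; the route, the data-access helpers, and the fields are all invented for illustration):

    const express = require('express');
    const app = express();

    // Single endpoint returning a purpose-built ViewModel.
    app.get('/api/dashboard-viewmodel/:userId', async (req, res) => {
      // Hypothetical data-access helpers standing in for real server-side models.
      const [user, orders, alerts] = await Promise.all([
        findUser(req.params.userId),
        findRecentOrders(req.params.userId),
        findAlerts(req.params.userId),
      ]);
      // Trim to exactly what the view displays: no data dump.
      res.json({
        displayName: user.name,
        recentOrders: orders.map((o) => ({ id: o.id, total: o.total })),
        alertCount: alerts.length,
      });
    });

    app.listen(3000);

The client makes one request, binds the result, and never sees the underlying models.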

Permanent client/server connection in Symfony

I'm doing a pet project with Symfony. In it, I scrape and parse the content of a few websites and APIs (it's all for personal use), and mix everything together. Up until now, I've been separating all the different retrieval processes, and basically it works like this: I have a menu, and each button updates something. When I push it, some website is loaded, the content is parsed and my database is updated. This takes some time, depending on the website loading time, the parsing etc. Basically, when I choose to update something, I lose control and I have no output about the situation until everything is done.
I'm rethinking the whole process, and the way I see this is having a page where I push a button, and a "permanent" connection is established with the server. Then, one thing a time, everything is updated. This could take some time (I would guess even 20 minutes), and therefore the server notifies the client with updates, and possibly even requires the user to make choices (I'm connecting data from different sources, and there are a few edge cases where it just can't automatically guess the right relationships).
I'm thinking about the best way to implement this. At first I thought simple Ajax/jQuery would work, but the client/server relationship seems too long-lived and bidirectional to keep things that simple. Then I thought about working with streams and/or WebSockets, but I don't really know those topics.
What is the best/correct way to do this, especially in a Symfony context?
I don't think this is really tied to Symfony; what you are looking for is called Server-Sent Events.
Server-sent events (SSE) is a technology for a browser to get automatic updates from a server via an HTTP connection. The Server-Sent Events EventSource API is standardized as part of HTML5 by the W3C.
For PHP in general, I use the Hoa\EventSource library, which makes things easy.
For Symfony, you can have a dedicated API endpoint that uses this library.
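On the browser side, consuming such an endpoint takes only the standard EventSource API. A minimal sketch; the '/update-progress' URL and the 'done' event name are placeholders for whatever the Symfony controller actually exposes:

    // Open a long-lived HTTP connection; the browser reconnects automatically.
    const source = new EventSource('/update-progress');

    // Default, unnamed events carry progress messages.
    source.onmessage = (e) => {
      console.log('progress:', e.data);
    };

    // A named event can signal that the whole run is finished.
    source.addEventListener('done', () => source.close());

    source.onerror = () => console.warn('connection lost, retrying...');

Note that SSE is one-way (server to client); the user choices mentioned in the question would still travel back over ordinary AJAX requests.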

Faster API access via javascript or server-side libraries

I would like to display information obtained from a web-API. (say, instagram or last.fm)
Is there generally a noticeable speed difference between using a server-side (Ruby) or client-side (JS) API library?
I would guess JavaScript would be faster, since you can contact the API asynchronously once the page has loaded in the client's browser.
Just wondering if there is a "best practice" for this, since API libraries generally exist for both client side and server side.
Each way has pros and cons; one really isn't better or worse than the other in the general case. You have to use what makes sense in your situation.
With a small amount of data/operations, you will most likely be bound by the web request itself, not by the processing power of either JavaScript or Ruby.
I haven't made an exhaustive list, but some general things to consider:
Client Side:
Processing power is pushed over to the client: you use fewer server resources, but performance will vary.
The API must support either JSONP or CORS (an Access-Control-Allow-Origin header).
Very slow for large operations on large data sets.
Any API key will be viewable by any visitor to your website.
Initial response time from your server will be quicker, but if you load most of your content from an API request, you'll still have to wait.
Server Side:
Won't expose an API key to the client.
Uses more resources, but can handle larger amounts of data/operations.
Will take longer to return the initial page.
Sometimes has additional user features/methods, since OAuth or other security mechanisms are supported.
One big difference is that the JS API does not add load to your server, so it makes it easier to scale your web app.
Also, in general, using JS is likely to be faster for the user, because the client's browser gets the data directly from the web-API server rather than going via your server.
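One common middle ground is a thin server-side proxy: the browser still fetches asynchronously after page load, but the API key never reaches the client. A hedged sketch (Node/Express with the global fetch of Node 18+; the route and environment-variable names are mine, the last.fm endpoint is from its public docs):

    const express = require('express');
    const app = express();

    // The key lives only on the server, never in page source.
    const API_KEY = process.env.LASTFM_API_KEY;

    app.get('/api/recent-tracks', async (req, res) => {
      const url = 'https://ws.audioscrobbler.com/2.0/' +
        `?method=user.getrecenttracks&user=${encodeURIComponent(req.query.user)}` +
        `&api_key=${API_KEY}&format=json`;
      const upstream = await fetch(url);
      res.json(await upstream.json());
    });

    app.listen(3000);

This keeps the key private at the cost of routing each call through your server, so the scaling trade-off above still applies.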

nodejs: Ajax vs Socket.IO, pros and cons

I thought about getting rid of all client-side Ajax calls (jQuery) and instead using a permanent socket connection (Socket.IO).
Therefore I would use event listeners/emitters on both the client side and the server side.
For example: a click event is triggered by the user in the browser, a client-side emitter pushes the event through the socket connection to the server, a server-side listener reacts to the incoming event and pushes a "done" event back to the client, and the client's listener reacts by fading in a DIV element.
Does that make sense at all?
Pros & cons?
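(Sketched in code, that round trip might look like the following; the event names and the #result selector are invented for illustration.)

    // Client side (separate file; assumes jQuery and the Socket.IO client script are loaded)
    const socket = io(); // connect back to the serving host
    $('#button').on('click', () => socket.emit('buttonClicked'));
    socket.on('done', () => $('#result').fadeIn());

    // Server side (separate file, Node)
    const { Server } = require('socket.io');
    const io = new Server(3000);
    io.on('connection', (socket) => {
      socket.on('buttonClicked', () => {
        // ...do the actual work here...
        socket.emit('done'); // notify only the requesting client
      });
    });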
There is a lot of common misinformation in this thread.
TL;DR:
WebSocket replaces HTTP for applications! It was standardized as RFC 6455 with the backing of Google, Microsoft, and many other leading companies. All browsers support it. There are no cons.
SocketIO is built on top of the WebSocket protocol (RFC 6455). It was designed to replace AJAX entirely. It does not have scalability issues whatsoever. It works faster than AJAX while consuming an order of magnitude fewer resources.
AJAX is 10 years old and is built on top of a single JavaScript function, XMLHttpRequest, that was added to allow callbacks to servers without reloading the entire page.
In other words, AJAX is a document protocol (HTTP) plus a single JavaScript function.
In contrast, WebSocket is an application protocol that was designed to replace HTTP entirely. When you upgrade an HTTP connection (by requesting the WebSocket protocol), you enable two-way, full-duplex communication with the server, and after that upgrade no further protocol handshaking is involved whatsoever. With AJAX, you must either enable keep-alive (which is the same idea as SocketIO, just an older protocol) or force new HTTP handshakes, which bog down the server, every time you make an AJAX request.
A SocketIO server running on top of Node can handle 100,000 concurrent connections in keep-alive mode using only 4 GB of RAM and a single CPU, and this limit is caused by the V8 garbage-collection engine, not the protocol. You will never, ever achieve this with AJAX, even in your wildest dreams.
Why SocketIO is so much faster and consumes so many fewer resources
The main reason for this is, again, that WebSocket was designed for applications, while AJAX is a work-around to enable applications on top of a document protocol.
If you dive into the HTTP protocol and use MVC frameworks, you'll see that a single AJAX request actually transmits 700-900 bytes of protocol overhead just to reach a URL (without any of your own payload). In striking contrast, WebSocket framing uses about 10 bytes, roughly 70x less data to talk to the server.
Since SocketIO maintains an open connection, there's no handshake, and server response time is limited to the round-trip (ping) time to the server itself.
There is misinformation that a socket connection is a port connection; it is not. A socket connection is just an entry in a table. Very few resources are consumed, and a single server can provide 1,000,000+ WebSocket connections. An AWS XXL server can, and does, host 1,000,000+ SocketIO connections.
An AJAX request will gzip/deflate the entire HTTP headers, decode the headers, encode the headers, and spin up an HTTP server thread to process the request, again because this is a document protocol: the server was designed to spit out documents a single time.
In contrast, WebSocket simply stores an entry in a table for the connection, approximately 40-80 bytes. That's literally it. No polling occurs at all.
WebSocket was designed to scale.
As for SocketIO being messy: this is not the case at all. AJAX is the messy one; it needs a promise/response pattern.
With SocketIO, you simply have emitters and receivers; they don't even need to know about each other, and no promise system is needed:
To request a list of users you simply send the server a message...
socket.emit("giveMeTheUsers");
When the server is ready, it will send you back another message. Tada, you're done. So, to process a list of users, you simply say what to do when you get the response you're looking for...
socket.on("HereAreTheUsers", (data) => showUsers(data));
That's it. Where is the mess? Well, there is none :) Separation of concerns? Done for you. Locking the client so it knows it has to wait? It doesn't have to wait :) You could get a new list of users whenever... The server could even play back any UI command this way... Clients can even connect to each other without a server, using WebRTC...
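For completeness, the server half of that users exchange is just as small. A hedged sketch (Node; getUsers() is a hypothetical data-access helper):

    const { Server } = require('socket.io');
    const io = new Server(3000);

    io.on('connection', (socket) => {
      socket.on('giveMeTheUsers', async () => {
        const users = await getUsers(); // hypothetical lookup
        socket.emit('HereAreTheUsers', users);
      });
    });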
Chat system in SocketIO? 10 lines of code. Real-time video conferencing? 80 lines of code. Yes... Luke... join me. Use the right protocol for the job... If you're writing an app... use an app protocol.
I think the problem and confusion here comes from people who are used to AJAX and think they need all the extra promise protocol on the client and a REST API on the back end... Well, you don't. :) It's not needed anymore :)
Yes, you read that right... a REST API is not needed anymore once you decide to switch to WebSocket. REST is actually outdated. If you write a desktop app, do you communicate with its dialogs over REST? No :) That's pretty dumb.
SocketIO, utilizing WebSocket, does the same thing for you... you can start to think of the client side as simply the dialog for your app. You no longer need REST, at all.
In fact, if you try to use REST while using WebSocket, it's just as silly as using REST as the communication protocol for a desktop dialog... there is absolutely no point, at all.
What's that you say, Timmy? What about other apps that want to use your app? Shouldn't you give them access via REST? Timmy... WebSocket has been out for 4 years... Just have them connect to your app using WebSocket, and let them request the messages using that protocol... it will consume 50x fewer resources, be much faster, and be 10x easier to develop... Why support the past when you're creating the future?
Sure, there are use cases for REST, but they are all for older and outdated systems... Most people just don't know it yet.
UPDATE:
A LOT of people have been asking me recently how they can start writing an app in 2018 (and now soon 2019) using WebSockets: the barrier seems really high, and once they play with Socket.IO they don't know where else to turn or what to learn.
Fortunately the last 3 years have been very kind to WebSockets...
There are now 3 major frameworks that support BOTH REST and WebSocket, and even IoT protocols or other minimal/speedy protocols like ZeroMQ; you don't have to worry about any of it, you just get support out of the box.
Note: Although Meteor is by far the most popular, I am leaving it out. Although it is a very, very well-funded WebSocket framework, anyone who has coded with Meteor for a few years will tell you it's an internal mess and a nightmare to scale. Sort of like what WordPress is to PHP: it is there, it is popular, but it is not very well made. It's not well thought out, and it will soon die. Sorry, Meteor folks, but check out these 3 other projects compared to Meteor, and you will throw Meteor away the same day :)
With all of the frameworks below, you write your service once and get both REST and WebSocket support. What's more, it's a single line of config code to swap between almost any backend database.
Feathers: Easiest to use, works the same on the front end and the back end, and supports most features. Feathers is a collection of lightweight wrappers for existing tools like Express (see the sketch after this list). Using awesome tools like feathers-vuex, you can create immutable services that are fully mockable, support REST, WebSocket and other protocols (using Primus), and get full CRUD operations for free, including search and pagination, without a single line of code (just some config). It also works really well with generated data like json-schema-faker, so you can not only fully mock things, you can mock them with random yet valid data. You can wire up an app to support type-ahead search, create, delete and edit with no code (just config). As some of you may know, proper code-through-config is the biggest barrier to self-modifying code. Feathers does it right, and it will push you toward the front of the pack in the future of app design.
Moleculer: Moleculer is, unfortunately for Feathers, an order of magnitude better at the back end. While Feathers will work and will let you scale to infinity, it simply doesn't even begin to think about things like production clustering, live server consoles, fault tolerance, piping logs out of the box, or API gateways (I've built a production API gateway out of Feathers, but Moleculer does it way, way better). Moleculer is also the fastest growing, both in popularity and in new features, of any WebSocket framework.
The winning strike with Moleculer is that you can use a Feathers or ActionHero front end with a Moleculer back end; although you lose some generators, you gain a lot of production quality.
Because of this, I recommend learning Feathers on the front end and back end, and once you've made your first app, try switching your back end to Moleculer. Moleculer is harder to get started with, but only because it solves all the scaling problems for you, and that information can confuse newer users.
ActionHero: Listed here as a viable alternative, but Feathers and Moleculer are better implementations. If anything about ActionHero doesn't jibe with you, don't use it; the two options above give you more, faster.
NOTE: API gateways are the future, and all 3 of the above support them, but Moleculer literally gives you one out of the box. An API gateway lets you massage your client interaction, allowing caching, memoization, client-to-client messaging, blacklisting, registration, fault tolerance and all other scaling issues to be handled by a single platform component. Coupling your API gateway with Kubernetes will let you scale to infinity with the least amount of problems possible. It is the best design method available for scalable apps.
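To give a feel for the "write once, get both transports" claim, here is a minimal Feathers sketch following its standard setup (the '/users' path and the in-memory feathers-memory service are my choices for illustration):

    const feathers = require('@feathersjs/feathers');
    const express = require('@feathersjs/express');
    const socketio = require('@feathersjs/socketio');
    const memory = require('feathers-memory');

    const app = express(feathers());
    app.use(express.json());
    app.configure(express.rest());   // the same service over REST...
    app.configure(socketio());       // ...and over WebSocket
    app.use('/users', memory());     // in-memory CRUD service, zero handler code

    app.listen(3030);

Swapping feathers-memory for a database-backed adapter is the "single line of config" mentioned above.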
Update for 2021:
The industry has evolved so much that you don't even need to pay attention to the protocol. GraphQL now uses WebSockets by default! Just look up how to use subscriptions, and you're done; the fastest transport will be picked for you.
If you use Vue, React or Angular, you're in luck, because there are native GraphQL client implementations for you! Just request your data from the server using a GraphQL subscription, and that data object will stay up to date and reactive on its own.
GraphQL will even fall back to REST for you when you need to use legacy systems, and subscriptions will still update over sockets. Everything is solved when you move to GraphQL.
Yes, if you thought "WTH?!?" when you heard you can simply subscribe to a server object, like with Firebase, and it will update itself for you: yes, that's now true. Just use a GraphQL subscription. It will use WebSockets.
Chat system? 1 line of code.
Real-time video system? 1 line of code.
Video game with 10 MB of open-world data shared across 1M real-time users? 1 line of code. The code is just your GQL query now.
As long as you build or use the right back end, all this realtime stuff is now done for you with GQL subscriptions. Make the switch as soon as you can and stop worrying about protocols.
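For the curious, consuming a subscription over WebSocket takes very little client code. A hedged sketch using the graphql-ws library; the endpoint URL and the schema fields are assumptions:

    import { createClient } from 'graphql-ws';

    const client = createClient({ url: 'ws://localhost:4000/graphql' });

    // The 'next' callback fires every time the server pushes an update.
    client.subscribe(
      { query: 'subscription { users { id name } }' },
      {
        next: ({ data }) => console.log('users changed:', data.users),
        error: console.error,
        complete: () => console.log('subscription closed'),
      }
    );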
Socket.IO uses a persistent connection between client and server, so you will reach a maximum number of concurrent connections depending on the resources you have server-side, while more async Ajax requests can be served with the same resources.
Socket.IO is mainly designed for realtime, bi-directional connections between client and server, and in some applications there is no need to keep a permanent connection. On the other hand, async Ajax connections must go through the HTTP connection setup phase and send header data and all cookies with every request.
Socket.IO was designed as a single-process server and may have scalability issues depending on the server resources you are bound to.
Socket.IO is not well suited to applications where you are better off caching the results of client requests.
Socket.IO applications face difficulties with SEO optimization and search-engine indexing.
Socket.IO is not a standard and is not equivalent to the W3C WebSocket API. It uses the WebSocket API where the browser supports it; socket.io was created by one person to solve cross-browser compatibility in realtime apps and is quite young, about a year old. Its learning curve, the smaller pool of developers and community resources compared with Ajax/jQuery, long-term maintenance, and less need (or better options) in the future may be important factors for teams deciding whether to base their code on socket.io.
Sending one-way messages and invoking callbacks on them can get very messy.
$.get('/api', sendData, returnFunction);
is cleaner than
socket.emit('sendApi', sendData);
socket.on('receiveApi', returnFunction);
Which is why dnode and nowjs were built on top of socket.io to make things manageable: still event-driven, but without giving up callbacks.
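Worth noting: Socket.IO also supports acknowledgement callbacks, which restore request/response pairing without a separate 'receive' event. A small sketch; the event name and handleApi() are invented:

    // Client: pass a callback as the last emit argument.
    socket.emit('sendApi', sendData, returnFunction);

    // Server: invoke the supplied ack to 'respond' to that exact request.
    io.on('connection', (socket) => {
      socket.on('sendApi', (data, ack) => {
        ack(handleApi(data)); // handleApi is a hypothetical handler
      });
    });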
