Communication Microservices/Microfrontend Architecture

When reading literature about microfrontends, I always see that the frontend is composed of microfrontends that different teams develop. Each microfrontend has at least one backend. What I do not see is the backends communicating with each other. Is that right? Are they separated in such a way that they can live completely without any communication between the backends?

Benefits of microfrontends:
Ability to deploy UI code independently between teams/domains
Ability to use different technologies per team
Hard boundaries/encapsulation of UI code for that domain
Faster build, test, deploy times given smaller codebase
Optimal microfrontend architecture:
Assuming a single user-facing webapp:
A "platform" microfrontend serves the "skeleton" page
From the user's perspective, it's a single site with one domain name (avoids CORS issues)
The "skeleton" page calls out to team specific frontends, based on namespaced routes, typically this path-based routing is handled via an ingress or reverse proxy (e.g. /namespace/accounting goes to the accounting frontend)
A frontend service (microfrontend) is strictly responsible for presentation concerns, and often calls out to other backend services that have ownership over various data.
A frontend service contains logic both for serving static assets/components and for handling AJAX requests/composing UI-specific data.
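To make the routing concrete, here is a minimal sketch of such path-based routing using Node/Express with http-proxy-middleware as a stand-in for a real ingress; the team names, hosts, and ports are assumptions for illustration:

```typescript
// skeleton.ts - the "platform" service that owns the single domain name.
// Path-based routes are forwarded to each team's frontend service.
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

// /namespace/accounting/* goes to the accounting team's frontend
app.use(
  '/namespace/accounting',
  createProxyMiddleware({ target: 'http://accounting-frontend:3000', changeOrigin: true })
);

// /namespace/billing/* goes to the billing team's frontend
app.use(
  '/namespace/billing',
  createProxyMiddleware({ target: 'http://billing-frontend:3000', changeOrigin: true })
);

// everything else is the skeleton page itself (static assets)
app.use(express.static('skeleton/dist'));

app.listen(8080);
```

Since every route lives under one domain name, the browser never makes a cross-origin request, which is how this setup avoids CORS issues.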
Summary:
Your frontend service will generally have to call out to backend services to compose data for presentational purposes. For example, if you need to display user data, you will likely need to call out to some UserService or AccountService to get additional details about that user. I don't recommend attempting to build a separate datastore with replicated data that's specific to the frontend service.
The frontend service typically shouldn't contain business logic; however, there's an argument to be made that for smaller apps/earlier on it can make sense to have a single service that handles both the UI and business logic for the same domain. It's generally the lesser evil to have services that are too broad rather than too narrow.
However, in a microservices architecture it's still important that you keep necessary dependencies between services to a minimum. A common problem is running into "dependency" hell, where you call Service A, which needs to call Service B and so on, which makes the architecture slow and brittle. A frontend service would typically make calls to services that are only "one layer deep", and then compose those responses into a single display data/payload.
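As a sketch of that "one layer deep" composition (the service names, routes, and response shapes below are hypothetical, and global fetch assumes Node 18+), the frontend service fans out to the owning services in parallel and merges the results into one display payload:

```typescript
// profile-frontend.ts - composes display data from two owning services.
import express from 'express';

const app = express();

app.get('/api/profile/:userId', async (req, res) => {
  const { userId } = req.params;

  // Fan out to the owning backend services in parallel (one layer deep),
  // rather than chaining Service A -> Service B -> Service C.
  const [userRes, ordersRes] = await Promise.all([
    fetch(`http://user-service/users/${userId}`),
    fetch(`http://order-service/orders?userId=${userId}`),
  ]);
  const user = await userRes.json();
  const orders = await ordersRes.json();

  // Compose a single, presentation-oriented payload for the UI.
  res.json({
    displayName: user.name,
    email: user.email,
    recentOrders: orders.slice(0, 5),
  });
});

app.listen(3000);
```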
Finally, it's very important that you choose the boundaries of your frontend service/domain wisely. It shouldn't be the case that you have many frontend services that all need to frequently call out to the same backend services. Better to start with a single broad frontend service and break it down further as you become more confident in the boundaries.

Related

Why do microservices need to communicate with each other?

I'm new to this, but in the case where we have a frontend plus a few different microservices, I just don't get why any of them need to communicate with each other if we can work with their data via axios on the frontend. What is the purpose of an event bus and event-driven architecture in the case where we use both a frontend and backend microservices?
Okay, for my example I'm using 5 microservices. Here are two of them:
Shopping cart
Posts
And I want to access the posts microservice directly and pass its data through the event bus, so that the shopping cart microservice would have that information. The reason is that posts and the shopping cart have different databases. Is doing it that way a good approach, or should it just go through the frontend with an axios service?
What you are suggesting could be true for a very simple application, which hardly even needs an architecture such as microservices. It should be clear why services need communication:
some services are not even accessible from the client (for various reasons, such as security), so a change in them must be initiated by other backend services that have the required privileges
some changes originate in backend services rather than the client, e.g. a cron job performing some task
it would undermine reusability, as you must consider the service being used not only by the client but in any environment
what would happen if you wanted your services to be usable by the public? What if a client failed to implement part of the needed logic, intentionally or by mistake?
making the client do everything could become very complex and would reduce flexibility
some services, such as authentication, act as a supporting mechanism to ensure safety (or anything else beyond the main logic); these should be called directly by the service that needs them
As for the second part of your question, it depends on several factors like your business needs and models, desired scalability, performance, availability, etc., so the right answer, or rather the answer that fits, will differ.
For your problem, using an event bus, which is asynchronous, would not be a good solution, as it would hurt consistency between your services. Instead, a synchronous approach, such as a simple API call to your posts service, would be a better idea.
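To illustrate the synchronous option (all names, URLs, and shapes below are made up), the shopping cart service simply calls the posts service's API at the moment it needs the data, rather than keeping a replicated copy in sync over an event bus:

```typescript
// Inside the (hypothetical) shopping cart service.
interface Post {
  id: string;
  title: string;
  price: number;
}

async function addPostToCart(cartId: string, postId: string): Promise<void> {
  // Ask the posts service for the authoritative record right now,
  // so the cart never works from stale replicated data.
  const res = await fetch(`http://posts-service/posts/${postId}`);
  if (!res.ok) {
    throw new Error(`posts service returned ${res.status}`);
  }
  const post: Post = await res.json();

  // ...persist the line item in the shopping cart's own database
  console.log(`adding "${post.title}" (price ${post.price}) to cart ${cartId}`);
}
```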

Microservices using Service Fabric: where to place controllers

I have a micro-service project with multiple services in .NET Core. When it comes to placing the controllers, there are 2 approaches:
Place the controllers in their respective microservices, with a Startup.cs in each microservice.
Place all controllers in a separate project and have them call the individual services.
I think the 1st approach will involve less coding effort but the 2nd one separates controllers from actual services using interfaces etc.
Is there a difference in terms of how they are created and managed in Fabric using the two approaches?
This is a very broad topic and can raise points for discussion, because it all depends on preferences, experience, and tech stacks. I will add my two cents, but do not consider it a rule, just my view of both approaches.
First approach (APIs for each service isolated from each other):
the services expose their public APIs themselves, and you will need to put a service discovery approach in place to enable clients to call each microservice; a simple one is using a reverse proxy to forward the calls by service name.
Each service and its APIs scale independently.
This approach is better for deploying individual updates without taking down other microservices.
This approach tends to have more code repetition to handle authorization, authentication, and other common aspects; from there you will likely end up extracting shared libraries used across all services.
This approach increases the number of points of failure, which can be a good thing because each failure affects fewer services: if one API is failing, other services won't be impacted (as long as the failure does not affect the whole machine, e.g. a memory leak or high CPU usage).
The second approach (a single API to forward the calls to the right services):
You have a single endpoint, and service discovery happens in the API; the actual work is handled by each service.
The API must scale for everyone, even if one service consumes much more resources than the others; only the backend services scale independently.
With this approach, to add or modify API endpoints you will likely have to update both the API and the service, and taking down the API affects all the other services.
This approach reduces code duplication, and you can centralize many common aspects like authorization, request throttling, and so on.
This approach has fewer points of failure, but each failure has a wider impact: if one microservice goes down and a good share of calls depend on it, the API will hold more connections and pending requests, which affects the other services and their performance. If the API itself goes down, every service becomes unavailable. The first approach, by comparison, offloads that resilience concern to the proxy or to the client.
In summary,
both approaches will take a similar effort; the difference is that the effort is split across different areas. You should evaluate both and consider which one you would rather maintain. Don't consider just code in the comparison, because code has very little impact on the overall solution compared with other aspects like release, monitoring, logging, security, and performance.
In our current project we have a public-facing API. We have several individual microservice projects for each domain. Being individual allows us to scale according to the resources each microservice uses. For example, we have an imaging service that consumes a lot of resources, so scaling it is easier. You also get the chance to deploy them individually, and if any service fails it doesn't break the whole application.
In front of all the microservices we have an API Gateway that handles all the authentication, throttling, versioning, health checks, metrics, logging, etc. We have interfaces for each microservice, and keep the Request and Response models separate for each context. There is no business logic in this layer, and you also get the chance to aggregate responses where several services need to be called.
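As a rough sketch of such a gateway (shown in TypeScript/Express for brevity rather than .NET/Service Fabric, with made-up service names), the cross-cutting concerns live once at the edge while each domain service keeps its own controllers behind it:

```typescript
// gateway.ts - centralizes auth, throttling, etc. in front of the services.
import express from 'express';
import rateLimit from 'express-rate-limit';
import { createProxyMiddleware } from 'http-proxy-middleware';

const app = express();

// request throttling, applied once for every service
app.use(rateLimit({ windowMs: 60_000, max: 100 }));

// naive bearer-token check as a stand-in for real authentication
app.use((req, res, next) => {
  if (!req.headers.authorization?.startsWith('Bearer ')) {
    return res.status(401).send('Unauthorized');
  }
  next();
});

// each domain service scales and deploys independently behind the gateway
app.use('/images', createProxyMiddleware({ target: 'http://imaging-service' }));
app.use('/accounts', createProxyMiddleware({ target: 'http://account-service' }));

app.listen(8080);
```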
If you would like to ask anything about this structure please feel free to ask.

Communication pattern for MicroFrontends to MicroService backend through single channel/websocket

We are currently facing some tough architectural questions about integrating multiple Web Components, each with its own backend service, into one smooth composite web application.
There are some constraints and (negotiable) design decisions:
A MicroService should serve its own frontend (WebComponent); we would like to use HTML Imports to allow including such a WebComponent in the composite UI
A frontend WebComponent needs to be able to receive live updates/events from its backend MicroService
The page (the sum of Web Components used in the composite UI) shall only use one connection/permanently occupied port to communicate with the backend
I made a sketch representing our abstract / non-technical requirements for further discussion:
As I understand it, the problem can be rephrased as: how do we
a) concentrate communication when entering
b) distribute communication when exiting
the single transport path, on both ends?
These two tasks need to be solved on both sides of the transport path, i.e. the backend and the frontend.
For the backend, I am quite hopeful that adopting the BFF pattern, described by Sam Newman among others, could serve our needs. The right (backend) half of the above sketch could then look similar to this:
The transport path might be best served by standardized web technologies, e.g. HTTPS, and WebSocket (wss) for the frequently needed bidirectional communication. I'm keen to learn about alternatives with an equally high adoption rate in the web technology sector.
For the frontend we are currently lacking ideas and knowledge about previously described patterns or frameworks.
The tricky thing is that multiple, basically independent Web Components need to come together to share the ONE central communication path. If the frontend were realized as one (big) Angular application, for example, we would implement a "BackendConnectorService" (name to be discussed) and inject it into our various components.
But since we would like to use decoupled Web Components, no such background layer for shared business logic and dependency injection exists. Should we write a proprietary JS library, which every one of our components loads into the window context if it is not present yet, and which is used (by convention) to communicate with the backend?
This would roughly integrate into the sketch like below:
Are we thinking/designing our application wrong?
I'm thankful for every reasonable idea or hint for a proven pattern/framework.
Also your view on the problem and the architecture might be helpful to circumnavigate the issues we are facing right now.
I would go with the same approach on the frontend that you are using on the backend.
Create an HTTP or WS gateway that the frontend components send their requests to. It connects to the backend BFFs, and there you have solved all your problems: any time you want to swap your components, transport, or architecture, one side is not dependent on the other.
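One way to realize that shared connector on the frontend side (a sketch only; the { topic, payload } envelope and the window-level convention are assumptions, not a standard) is a tiny library that each Web Component loads if it is not present yet, multiplexing all component traffic over the one WebSocket:

```typescript
// backend-connector.ts - shared, by convention, via the window context.
type Handler = (payload: unknown) => void;

class BackendConnector {
  private socket: WebSocket;
  private handlers = new Map<string, Set<Handler>>();

  constructor(url: string) {
    this.socket = new WebSocket(url);
    // fan incoming events out to whichever components subscribed
    this.socket.onmessage = (event) => {
      const { topic, payload } = JSON.parse(event.data);
      this.handlers.get(topic)?.forEach((handler) => handler(payload));
    };
  }

  // each Web Component subscribes to the topics of its own backend
  subscribe(topic: string, handler: Handler): void {
    if (!this.handlers.has(topic)) this.handlers.set(topic, new Set());
    this.handlers.get(topic)!.add(handler);
  }

  // all outgoing traffic is funneled through the single socket
  send(topic: string, payload: unknown): void {
    this.socket.send(JSON.stringify({ topic, payload }));
  }
}

declare global {
  interface Window { backendConnector?: BackendConnector; }
}

// by convention, every component loads this file and reuses the instance
window.backendConnector ??= new BackendConnector('wss://app.example.com/ws');
```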

What is the role of falcor in a microservice architecture?

Say we have the following taxi-hailing application that is composed of loosely coupled microservices:
The example is taken from https://www.nginx.com/blog/introduction-to-microservices/
Each service has its own REST API, and all services are combined behind a single API gateway. The client does not talk to a single service but to the gateway. The gateway requests information from several services and combines it into a single response. To the client it looks like it is talking to a monolithic application.
I am trying to understand: where could we incorporate falcor into this application?
One Model Everywhere, from http://netflix.github.io/falcor/:
Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server.
In this taxi-hailing application, each microservice already represents a single domain model. Can you think of any benefit we could derive by wrapping each microservice with Falcor? I cannot.
However, I think it is very convenient to incorporate Falcor into the API gateway, because we can abstract away the different domain models created by the microservices into one single model, or at least a few models.
What is your opinion?
You are right. This is how Netflix uses Falcor and what the Falcor router is designed for.
From the documentation:
The Router is appropriate as an abstraction over a service layer or REST API. Using a Router over these types of APIs provides just enough flexibility to avoid client round-trips without introducing heavy-weight abstractions. Service-oriented architectures are common in systems that are designed for scalability. These systems typically store data in different data sources and expose them through a variety of different services. For example, Netflix uses a Router in front of its Microservice architecture.
It is rarely ideal to use a Router to directly access a single SQL Database. Applications that use a single SQL store often attempt to build one SQL Query for every server request. Routers work by splitting up requests for different sections of the JSON Graph into separate handlers and sending individual requests to services to retrieve the requested data. As a consequence, individual Router handlers rarely have sufficient context to produce a single optimized SQL query. We are currently exploring different options for supporting this type of data access pattern with Falcor in future.
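As a small illustration of that (the service URLs and JSON shapes below are invented for the taxi-hailing example), a Router in the gateway maps regions of one virtual JSON graph onto different microservices:

```typescript
// gateway-router.ts - one virtual JSON graph over several services.
import Router from 'falcor-router';

const TaxiRouter = new Router([
  {
    // the client asks for e.g. ['drivers', 42, 'name']; this handler
    // resolves it against the (hypothetical) driver-management service
    route: 'drivers[{integers:ids}].name',
    async get(pathSet: any) {
      return Promise.all(
        pathSet.ids.map(async (id: number) => {
          const driver = await fetch(`http://driver-service/drivers/${id}`)
            .then((r) => r.json());
          return { path: ['drivers', id, 'name'], value: driver.name };
        })
      );
    },
  },
  {
    // ['trips', <id>, 'fare'] is owned by the billing service instead
    route: 'trips[{integers:ids}].fare',
    async get(pathSet: any) {
      return Promise.all(
        pathSet.ids.map(async (id: number) => {
          const trip = await fetch(`http://billing-service/trips/${id}`)
            .then((r) => r.json());
          return { path: ['trips', id, 'fare'], value: trip.fare };
        })
      );
    },
  },
]);

export default TaxiRouter;
```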
Falcor is really a great API if it is used in the correct way, for very relevant use cases like:
Your page has to make multiple REST endpoint calls
These calls don't depend on each other
All the REST calls happen on initial page load
Performance: if you want to cache the REST responses (for example, if the microservice already uses Gemfire caching you may not need the Falcor cache, though you could still use Falcor caching to reduce network latency)
Server request batching: when running Falcor in a Node environment, you may want to cut down the number of calls from the client side to the Node server (see the sketch after this list)
Easier response parsing: if you don't want the client code to worry about extracting the data points from the REST response (including error handling)
and so on.
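A client-side sketch of the batching and caching points above, assuming a falcor-express style '/model.json' endpoint:

```typescript
import falcor from 'falcor';
import HttpDataSource from 'falcor-http-datasource';

// one model, with batching turned on: paths requested in the same tick
// are collapsed into a single request to the Node server
const model = new falcor.Model({
  source: new HttpDataSource('/model.json'),
}).batch();

// two independent components asking for data on initial page load...
model.get(['drivers', 42, 'name']).then((res) => console.log(res.json));
// ...become one batched HTTP call, and repeat reads hit the local cache
model.get(['trips', 7, 'fare']).then((res) => console.log(res.json));
```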
However, there are plenty of situations where Falcor does not serve the purpose as well, and you are better off calling the endpoint directly:
The REST calls depend on one another
You want to pass a lot of parameters when calling the endpoint
You don't intend to cache the response(s)
You want to share some secure cookies (e.g., XSRF tokens) with the REST web service

SOA and APIs - are the APIs part of the endpoint services, or do they require a distinct service?

I'm designing a RESTful service-oriented architecture web application to make it scale as well as possible and to put different kinds of services on different machines (separating resource-intensive operations from the other services).
I also want users to be able to access their data to make their own applications.
I'm not sure whether I should design these services to be open to the world, so that exposing them is just a matter of making them listen on a web domain (like AWS), or whether I should create another service to handle API requests.
It makes sense to me to have secure, open web services, but that does add a lot of complexity to the architecture itself, because each service becomes a client that has to be recognized (trusted) by the other services in the same suite, just as I have to recognize 3rd-party applications trying to access their own data.
Is this the right SOA approach? What I want to be sure of is that I'm not mixing up concepts and designing a wrong service-oriented architecture.
All services have CRUD interfaces, so they can be queried following REST principles.
Depending on the nature of your system, it may be viable to have unsecured webservices, so they can all talk to each other without the security overheads. To make the services available to 3rd parties, you could then use a Service Perimeter Guard as the only mechanism for accessing the services externally and apply security at this layer. This has the benefit of providing consistent security across all of your services, however if the perimeter is compromised then access to all of the services is obtained.
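A rough sketch of that perimeter guard (TypeScript/Express purely for illustration; the token check and internal hostnames are placeholders): internal services keep talking to each other without security overhead, while all external traffic passes this one guarded entry point:

```typescript
// perimeter-guard.ts - the only externally reachable entry point.
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';

const guard = express();

// consistent security, applied once for every externally exposed service
guard.use((req, res, next) => {
  const token = req.headers.authorization;
  if (!token || !isValidApiToken(token)) {
    return res.status(401).send('invalid or missing API token');
  }
  next();
});

// only authenticated traffic reaches the unsecured internal network
guard.use('/orders', createProxyMiddleware({ target: 'http://internal-order-service' }));
guard.use('/customers', createProxyMiddleware({ target: 'http://internal-customer-service' }));

// placeholder: validate against your identity provider or API key store
function isValidApiToken(token: string): boolean {
  return token.startsWith('Bearer ');
}

guard.listen(8443);
```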
This approach may not be viable for all services. For instance information that is considered "personal-in-confidence" (e.g., employee data such as home addresses, emergency contact details, health data, etc), will need to be secured so that unauthorised staff cannot access it.
Regarding your comment of putting different services on different machines, this will result in under-utilised resources on some machines and possibly over-utilised resources on others. To avoid this, deploy all services to all machines and use a load-balancer. This will provide more optimal resource usage and simplify deployments (e.g., using Chef or Puppet) as all of the nodes are the same. As the resource usage increases, you can then simply add more nodes. Similarly if the resource usage is low, you can remove nodes.
Regarding your last sentence, there is a whole lot more to REST than CRUD (such as HATEOAS).
