How do I synchronize an action across many microservices and return a single response to the client that takes each microservice's response into account?
I'm making a social network application (like Facebook) with microservices, for learning purposes. I've divided the app into the following microservices, each with its own database:
Authentication - Login/Register, returns a JWT token.
Database stored properties are: UserName, Email, PasswordHash, PasswordSalt.
UserProfiles - Gets and Updates profiles.
Database stored properties are: UserName, FirstName, LastName, Gender, Photos
UserPosts - users can publish posts about anything they like, and others can comment.
Database stored properties are: UserName, UserPosts, Comments
Gateway - collects HTTP requests from clients and forwards them to the correct microservices.
There will be more, like Messages between users.
Some properties will be duplicated across databases (UserName, and there may be more). I suppose I cannot avoid that if I want to make the services independent.
Now, what do I do if the user decides to change a shared property, like UserName? Obviously it will require every service to update its database. But what if one of the services cannot connect to its database or hits some other error? The response should be 500 Internal Server Error. I can see two options for that:
Make the Gateway send an HTTP request to each microservice, requesting an update. But how do I pass information about an error in one of the services? This seems like a bad approach.
Publish a message (MassTransit, RabbitMQ) to all microservices with the update request. This way I can await a response from each service and decide what to return to the client. But who should be the publisher here? The Gateway? Authentication?
Is there some other way I have not thought about? I'll be thankful for any good-practice, clean-code advice.
Thank you
I tried messaging the other services from the Authentication service, since it's the one that creates the User entity in the first place. But that doesn't feel like a good enough reason to put the logic there.
After a little more research, I think I may have found a solution. I learnt about the Saga Orchestrator design pattern. Basically it's just a central unit of command (I'm not sure it deserves the fancy name). I will incorporate such a central unit into the Gateway microservice. It will still aggregate HTTP requests from clients, but instead of just redirecting them to the appropriate microservices, it will send a message to each microservice involved over the AMQP protocol (with MassTransit, RabbitMQ or similar). It will await a response from each microservice and decide what to return to the client. If any service returns an error, it will request a rollback operation or a retry.
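A minimal sketch of that orchestration logic, written here in plain Java just for neutrality (with MassTransit the steps would be request/response messages rather than direct method calls; SagaStep, UserNameChangeSaga and the step implementations are made-up names):

    // Saga-orchestration sketch: run each step, and if one fails,
    // compensate the steps that already succeeded, in reverse order.
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    interface SagaStep {
        void execute() throws Exception;   // e.g. ask UserProfiles to apply the new UserName
        void compensate();                 // undo the change if a later step fails
    }

    class UserNameChangeSaga {
        private final List<SagaStep> steps;

        UserNameChangeSaga(List<SagaStep> steps) {
            this.steps = steps;
        }

        /** Returns true when every service applied the change, false after a rollback. */
        boolean run() {
            Deque<SagaStep> completed = new ArrayDeque<>();
            for (SagaStep step : steps) {
                try {
                    step.execute();
                    completed.push(step);
                } catch (Exception e) {
                    // One service failed: undo what already succeeded so the
                    // gateway can return 500 (or schedule a retry) to the client.
                    while (!completed.isEmpty()) {
                        completed.pop().compensate();
                    }
                    return false;
                }
            }
            return true;
        }
    }

The important part is the compensation stack: whatever has already succeeded is undone in reverse order before the gateway reports the failure to the client.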
To start off, I would argue that such decomposition would be excessive for a real-world scenario. IMO microservices should evolve around bounded contexts, and all the social media stuff is a single bounded context. If you wanted to introduce another one that is totally different (e.g. a payment gateway), I would say that decomposition into microservices would make some sense. Although, even in such a case I would default to a modular monolith, since microservices are distributed systems and writing distributed systems correctly is hard (which you might have already experienced).
Nevertheless, I recognize that the example you've brought up is solely for learning purposes. So to answer your question: most users are ready to tolerate some inconsistencies as long as they are warned about them. So the natural solution to your question is eventual consistency. If you want to update UserName, you make a call to the UserProfile microservice and, if things are successful, you return a 200 response while sending a UsernameUpdated message via RabbitMQ to the other microservices. In that case, retrying on failure becomes the concern of each microservice that receives the message, and each one acknowledges the message only once its update is successful.
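A rough sketch of that flow, written in Java with Spring AMQP purely for illustration (MassTransit's publish/consume API is analogous; the exchange and queue names, the repositories and the UsernameUpdated event are made up):

    // Publisher side lives in UserProfiles, the listener in UserPosts.
    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;

    record UsernameUpdated(String oldName, String newName) {}

    interface ProfileRepository { void updateUserName(String oldName, String newName); }
    interface PostsRepository { void renameAuthor(String oldName, String newName); }

    class UserProfileService {
        private final RabbitTemplate rabbitTemplate;
        private final ProfileRepository profiles;

        UserProfileService(RabbitTemplate rabbitTemplate, ProfileRepository profiles) {
            this.rabbitTemplate = rabbitTemplate;
            this.profiles = profiles;
        }

        void changeUserName(String oldName, String newName) {
            profiles.updateUserName(oldName, newName);              // commit locally, return 200
            rabbitTemplate.convertAndSend("user-events", "username.updated",
                    new UsernameUpdated(oldName, newName));         // then notify the others
        }
    }

    class UserPostsListener {
        private final PostsRepository posts;

        UserPostsListener(PostsRepository posts) {
            this.posts = posts;
        }

        // The message is acknowledged only when this method returns normally;
        // on an exception it is redelivered, which gives each consumer its retries.
        @RabbitListener(queues = "user-posts.username-updated")
        void on(UsernameUpdated event) {
            posts.renameAuthor(event.oldName(), event.newName());
        }
    }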
As an alternative, you may call other microservices via RPC (HTTP) and wait for them to return a response in a synchronous fashion. But I suggest avoiding that.
Regarding the API Gateway: it's a nice pattern if you want a single entry point for your clients that routes requests to different microservices. You can also handle additional cross-cutting concerns like authorization in your API Gateway. But beware that every additional moving part adds complexity to your system.
I'm new to this, but if we have a frontend plus a few different microservices, I just don't get why any of them need to communicate with each other when we can work with their data via axios on the frontend. What is the purpose of an event bus and event-driven architecture if we use both a frontend and backend microservices?
Okay, for my example I'm using 5 microservices. Here are 2 of them:
Shopping cart
Posts
And I want to access the posts microservice directly and pass its data through the event bus, so the shopping cart microservice has that information. The reason is that posts and shopping cart have different databases. Is it good to do it that way, or just through the frontend with an axios service?
What you are suggesting could be true for a very simple application, which hardly even needs an architecture such as microservices. It is clear why services need to communicate:
some services are not even accessible from the client (for various reasons such as security), so a change in them must be initiated by other backend services with that privilege
some changes originate in backend services rather than the client, e.g. a cron job doing some task
it would hurt reusability, as you must consider the service being used not only by the client, but in any environment
what would happen if you want your services to be used by the public? What if a client does not implement part of the needed logic, intentionally or by mistake?
making the client do everything could get very complex and would reduce flexibility
some services, such as authentication, act as a supporting mechanism to ensure safety (or anything else outside the main logic); these should be called directly by the service that needs them
As for the second part of your question, it depends on several factors like your business needs and models, desired scalability, performance, availability, etc., so the right answer, or rather the answer that fits, will differ.
For your problem, using an event bus, which is asynchronous, would not be a good solution, as it would hurt consistency between your services. Instead, a synchronous approach like a simple API call to your posts service would be a better idea.
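A tiny sketch of what that synchronous call from the shopping-cart service could look like, using Java's built-in HttpClient (the service URL and class names are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    class PostsClient {
        private final HttpClient http = HttpClient.newHttpClient();

        /** Blocks until the posts service answers, so the caller sees consistent data. */
        String fetchPost(String postId) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://posts-service/posts/" + postId))
                    .GET()
                    .build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();   // raw JSON; map it to a DTO in real code
        }
    }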
Let's say I have 22 microservices. I develop with Docker locally.
The client wants to get product model data, which means aggregating data from 3 different services.
Should I use an aggregator gateway API, or should the SPA fetch from each service separately? Does an aggregator service couple the services?
These microservices patterns always come with trade-offs. Here you need to consider more than just the coupling issue when going with the Aggregator pattern (Backend for Frontend).
The following are some of the points you need to think about before going with this pattern.
The latency problem. If you want this implementation to work without latency problems, your services and the aggregator should be in the same location or the same data center. Avoid third-party calls from the aggregator.
This can introduce a single point of failure. Make sure that you've designed it in such a way that the service is highly available.
Implement a resilient design and timeouts, since this aggregator is calling other services and getting data. If one or more service calls take too long, it should time out and return a partial set of data. Consider how your application will handle this scenario (see the sketch at the end of this answer).
Monitor your aggregator and its child service calls. Implement distributed tracing using correlation IDs to track each call.
Ensure the aggregator has adequate performance to handle the load and can be scaled to meet your anticipated growth.
These are the best practices I can suggest; you are the best person to decide based on your system requirements and these points.
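As a rough illustration of the timeout-and-partial-data point above, an aggregator call might look like this (a CompletableFuture with a per-call deadline; the service URLs and the fetch helper are made up):

    import java.util.Map;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;

    class ProductAggregator {

        // Call the three services in parallel and fall back to a partial
        // result instead of failing the whole request.
        Map<String, Object> getProductModel(String productId) {
            CompletableFuture<Object> details = fetch("http://catalog/products/" + productId);
            CompletableFuture<Object> price   = fetch("http://pricing/prices/" + productId);
            CompletableFuture<Object> stock   = fetch("http://inventory/stock/" + productId);

            return Map.of(
                    "details", valueOrFallback(details),
                    "price",   valueOrFallback(price),
                    "stock",   valueOrFallback(stock));
        }

        private Object valueOrFallback(CompletableFuture<Object> call) {
            try {
                return call.get(2, TimeUnit.SECONDS);   // per-call deadline
            } catch (Exception e) {
                return "unavailable";                   // partial data instead of a 500
            }
        }

        private CompletableFuture<Object> fetch(String url) {
            // Placeholder: issue the HTTP call asynchronously and parse the body.
            return CompletableFuture.completedFuture("...");
        }
    }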
There are some compelling advantages to using a BfF service as an orchestration layer that aggregates calls to various backend data services.
It will reduce the complexity in the data access areas of your SPA.
It can also reduce load times.
Over time, your frontend devs will be less likely to get blocked on the backend devs assuming that the BfF is maintained by the frontend devs.
Take a look at this article on Consistency, Coupling, and Complexity at the Edge that goes into more detail on this and proposes some best practices such as GraphQL vs REST.
I am a Spring Boot dev.
I develop RESTful web services.
One of my colleagues developed an API that does two things based on an operation type.
If opType = Set, the API sets/unsets a flag on the backend, and if opType = Get, the API gets the status of the flag.
Does this not break the architecture of REST APIs?
We have POST/PUT to change data on the backend, either creating or updating.
And we have GET to read the state of something from the backend.
Now, I want the opinion of better developers!
Should this be allowed, having multiple operations behind one API call, or should we create a separate API for each task?
Also, the frontend devs on my team don't like integrating multiple APIs, suggesting that the more API calls there are, the poorer the user experience will be.
Is this the normal practice among app developers?
Comments requested.
GET requests in REST are not supposed to change the state of the server; they are read operations, whereas PUT/POST modify the state of the server, in the most general sense.
So usually you should have two endpoints: GET to read the state of the flag and PUT/POST to create and modify it.
Having said that, there is nothing that technically restricts you from implementing everything in one API. Such an API won't adhere to REST conventions, that's true, but from the client-server communication standpoint (usually HTTP-based) it's still perfectly doable.
Sure, separating into two endpoints makes the API clearer and the code easier to debug and maintain. But beyond being "RESTful", this can be treated as an opinionated claim.
I didn't really get the argument about integrating multiple APIs; in my understanding, the effort is the same, and the result is even clearer to front-enders, but they might have their own arguments.
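Since Spring Boot was mentioned, a minimal version of the two-endpoint split could look like this (FlagController and FlagService are made-up names, and the flag is modelled as a plain boolean):

    import org.springframework.web.bind.annotation.*;

    @RestController
    @RequestMapping("/flag")
    class FlagController {
        private final FlagService flagService;   // hypothetical backend holding the flag

        FlagController(FlagService flagService) {
            this.flagService = flagService;
        }

        @GetMapping
        boolean getFlag() {
            return flagService.isSet();           // safe, read-only
        }

        @PutMapping
        void setFlag(@RequestBody boolean value) {
            flagService.set(value);               // idempotent state change
        }
    }

    interface FlagService {
        boolean isSet();
        void set(boolean value);
    }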
I'm trying to start a little microservice application, but I'm a little bit stuck on some technicalities.
I'm trying to build an issue tracker application as an example.
It has 2 database tables, issues and comments. These will also be separate microservices, for the sake of the example.
It has to be a separate API that can be consumed by multiple types of clients, e.g. mobile, web, etc.
When using a monolithic approach, all the codebase is coupled together, and when making a request to, let's say, the REST API, I would handle for example the '/issues/19' request
to fetch the issue with the id '19' and its corresponding comments by means of the following pseudocode.
    def on_request_issue(id):                            # handler for the route '/issues/<id>'
        issue = IssuesModel.findById(id)
        issue.comments = CommentsModel.findByIssueId(id)
        return issue
But I'm not sure on how I should approach this with microservices. Let's say that we have microservice-issues and microservice-comments.
I could either let the client send a request to both '/issues/19' and '/comments/byissueid/19'. But that doesn't seem nice from my point of view, since if we have multiple things like this,
we're sending a lot of requests for one page.
I could also make a request to microservice-issues and, inside it, make a request to microservice-comments, but that looks even worse to me than the above, since from what
I've read microservices should not be coupled, and this couples them pretty hard.
So then I read about API gateways: they could/should receive a request and fan it out to the other microservices, but I couldn't really figure out how to use an API gateway. Should
I write code in there, for example, to catch the '/issues/19' request, then fan out to both microservice-issues and microservice-comments, assemble the results and return them?
In that case, I feel I'm doing the work twice; won't the API gateway become a new monolith then?
Thank you for your time
API gateway sounds like what you need.
If you keep it simple, just triggering internal APIs, it will not become your new monolith.
It will allow you to do even better processing when your application grows with new microservices, or when you have to support different clients (browser, mobile apps, watch, IoT, etc.).
BTW, the example you show sounds like a good exercise; in reality, for most web apps, it looks like over-design. I would not break every DB call out into its own microservice.
One of the motivations for breaking something into small(er) services is service autonomy. In this case the question is: when the comments service is down, should you display the issue or not? If they are always coupled anyway, they probably shouldn't reside in two services; if they aren't, then making two calls will give you this decoupling.
That said, you may still need an API Gateway to solve CORS issues with your client.
Lastly, comments/byissueid is not a good REST interface; the issueId should be a query parameter, e.g. /comments/?issueId=...
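For completeness, the gateway handler described in the question could be as small as this (a Spring RestTemplate sketch; the service hostnames are placeholders and both responses are treated as opaque JSON):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.springframework.web.bind.annotation.*;
    import org.springframework.web.client.RestTemplate;

    @RestController
    class IssueGatewayController {
        private final RestTemplate rest = new RestTemplate();

        // One client-facing endpoint that fans out to the two services
        // and assembles a single response.
        @GetMapping("/issues/{id}")
        Map<String, Object> issueWithComments(@PathVariable long id) {
            Object issue = rest.getForObject(
                    "http://microservice-issues/issues/{id}", Object.class, id);
            Object comments = rest.getForObject(
                    "http://microservice-comments/comments/?issueId={id}", Object.class, id);

            Map<String, Object> result = new LinkedHashMap<>();
            result.put("issue", issue);
            result.put("comments", comments);
            return result;
        }
    }

As long as the gateway only routes and assembles like this, it stays thin; the moment business rules creep into it, you are back to the mini-monolith you were worried about.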
I am trying to create a microservice architecture for a hobby project and I am confused about some decisions. Can you please help me, as I have never worked with microservices before?
One of my requirements is that my AngularJS GUI needs to show some drop-downs or lists of values (for example, a list of countries). These can be fetched using a microservice REST call, but where should the values come from? Can I fetch them from my Config Server, or should they come from a database? If the latter, should each microservice have its own database for lookup values, or can it be a common one?
How would server-side validation work in this case? There will certainly be a microservice call the GUI makes for validation, but should the validation service be a common microservice for all use cases/screens, should it be one per GUI page, or should the CRUD microservice be reused for validation as well?
How do I deal with a use case where the backend is not a database but a web-service call? Will I still need some local DB to maintain state between these calls (especially to handle the scenario where the web-service call fails) and finally pass the status on to the GUI?
First of all, there is no single way to design microservices; one has to choose according to the use case and project requirements.
Can I keep these in a Config Server, or should they come from a database?
Again, it depends on the use case and requirements. However, because every MS should have its own DB, you can use a DB if the countries have only names; but if they have some relationship with City/State, then you should definitely use a DB.
If a DB, should each of the microservices have their own DB for lookup values, or can it be a common one?
No, IMO multiple MSs should not depend on a single DB, because if the DB fails then all the MSs will fail, which should not happen. Each MS should work on its own without depending on another DB or MS.
should the validation service be a common microservice for all UseCases/Screens?
Same as point 2
How do I deal with a use case where the backend is not a database call but another web-service call? Will I need some local DB still to maintain some state in between these calls and finally pass on the status to the GUI?
If you are using HTTP, then you should not save the state of any request. If you want to forward the request to another MS, you can use a Feign client, which provides a very good way to call REST APIs along with other important features like load balancing.
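A small sketch of such a Feign client (Spring Cloud OpenFeign style; the service name and paths are placeholders):

    import java.util.List;
    import org.springframework.cloud.openfeign.FeignClient;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;

    // Declarative HTTP client: OpenFeign generates the implementation, and with a
    // discovery client / load balancer the "countries-service" name is resolved
    // to an actual instance.
    @FeignClient(name = "countries-service")
    interface CountriesClient {

        @GetMapping("/countries")
        List<String> allCountries();

        @GetMapping("/countries/{code}")
        String countryByCode(@PathVariable("code") String code);
    }

The application also needs @EnableFeignClients on a configuration class for the interface to be picked up.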
Microservice architecture is simple: we divide each task into separate services (like Spring Boot applications).
For example, in every application there will be a login function, a registration function and so on; each of these will be a separate service in a microservice architecture.
1. You can store that in a database, since if you want to add more values in the future, it is easy to do.
You can maintain separate DBs or a single DB, i.e. a single DB with a separate collection or table for each microservice.
2. By validation, do you mean who can use which microservice (role-based access)?
3. I think you have to use a local DB.
Microservices are a collection of loosely coupled services. For example, if you are creating an ecommerce application, user management can be a service, order management can be a service, and refund & chargeback management can be another service. Now each of these services can be further divided into smaller units; let's call them API endpoints. For example, user management can have login as one endpoint and signup as another.
If you want to leverage the power of microservice architecture in its true sense, here is what I would suggest. For the above example, create 3 Spring Boot applications, one for each service. The first thing that you should do after this is establish trust between those applications. I would prefer JWTs for trust establishment. After that, everything is a piece of cake. Here are the answers you are looking for:
You should ideally use a database, as opposed to keeping the values in the config server, for fetching a list of countries, so that you need not recompile your code every time a new country is added.
You can easily restrict access using @PreAuthorize if role-based access is what you are referring to (see the sketch after this list).
You can use OkHttp or any other HTTP client in this use case. You certainly need not maintain any local DB; however, you can cache the output of the web-service call if that is a requirement.
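For the role-based access point, a minimal @PreAuthorize sketch might look like this (the endpoint, role and data are arbitrary; method security must be enabled, e.g. via @EnableMethodSecurity or @EnableGlobalMethodSecurity(prePostEnabled = true) depending on the Spring Security version):

    import java.util.List;
    import org.springframework.security.access.prepost.PreAuthorize;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class OrderAdminController {

        // The JWT's authorities are checked before the method body runs.
        @PreAuthorize("hasRole('ADMIN')")
        @GetMapping("/admin/orders")
        List<String> allOrders() {
            return List.of("order-1", "order-2");   // placeholder data
        }
    }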
P.S.: Establishing trust between microservices can be a complex task if you don't understand all the subtleties, in which case I would recommend going ahead with a single Spring Boot application, i.e. a monolithic architecture. I would still recommend JWTs though.