Does each microservice's local transaction affect the database immediately in the Saga pattern?

Is each microservice's local transaction persistent in the Saga pattern? For example, if microservice A completes its transaction and sends an event to microservice B, do microservice A's operations affect the database immediately (which could result in an inconsistent state), or do they take effect only after the last microservice's OK event?

The point of the Saga pattern is to work without blocking. If the state of A's local transaction were driven by the outcome of the last microservice (i.e. by B), then A would have to wait for B's result and would be blocked in its processing.
In the Saga pattern the local transaction is applied immediately, as soon as the microservice finishes its work. If a failure happens, the saga ensures that A is informed about that outcome, and A is responsible for compensating the work already done, for example by deleting the persisted record. Microservice A could also persist the outcome of the local transaction with a flag (like "saga in progress") and ignore such items until the saga finishes as a whole, then switch the flag to something like "finished".
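A minimal sketch of that flag-based approach in Java; every name here (OrderService, SagaStatus, the in-memory map standing in for a table) is illustrative, not from the original answer:

    import java.util.HashMap;
    import java.util.Map;

    enum SagaStatus { SAGA_IN_PROGRESS, FINISHED, COMPENSATED }

    class OrderService {
        // Stands in for a database table with a status column.
        private final Map<Long, SagaStatus> orders = new HashMap<>();

        // The local transaction commits immediately: the record is persistent,
        // but flagged so readers ignore it until the saga completes.
        void createOrder(long orderId) {
            orders.put(orderId, SagaStatus.SAGA_IN_PROGRESS);
            // ...publish an "order created" event to the next participant here
        }

        // The last participant reported OK: flip the flag.
        void onSagaCompleted(long orderId) {
            orders.put(orderId, SagaStatus.FINISHED);
        }

        // A participant failed: compensate the work already committed,
        // e.g. delete the record or mark it as compensated.
        void onSagaFailed(long orderId) {
            orders.put(orderId, SagaStatus.COMPENSATED);
        }

        boolean isVisible(long orderId) {
            return orders.get(orderId) == SagaStatus.FINISHED;
        }
    }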

Related

Saga Compensating Transaction

I am currently working on the initial phase of a microservice architecture for a product. It is evident that many operations require a distributed transaction, i.e. a couple of operations across different microservices are required to complete one business process.
For this purpose I found that the Saga pattern is useful. In the ideal case, where everything goes correctly, it works fine; but when something goes wrong or some activity fails, we may have to roll back those operations. For this, a "compensating" transaction or operation is required. The complication is that after an operation completes successfully, other transactions may also run against that service, so the database may be in a different state than it was when the original operation was performed.
What would be the solution for this? One option is to somehow preserve the state so it can be revisited, but a value such as a stock level may have been changed by another transaction in the meantime, so I feel that the compensating transaction would be a problem.
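One commonly suggested answer, sketched below purely as an illustration (the StockService name and its methods are hypothetical): compensate semantically by applying the inverse delta instead of restoring a snapshot of the old value, so transactions that ran in between are preserved.

    // Sketch: compensation as an inverse delta rather than a snapshot restore.
    // If another transaction changed the stock in the meantime, its effect
    // survives, because we never overwrite the full value.
    class StockService {
        private int quantity = 100;

        // Forward action of the saga step.
        synchronized void reserve(int amount) {
            if (quantity < amount) throw new IllegalStateException("not enough stock");
            quantity -= amount;
        }

        // Compensating action: add the same amount back.
        synchronized void compensateReserve(int amount) {
            quantity += amount;
        }
    }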

Database failure in microservice architecture

Suppose we are in a microservice architecture with
2 microservices with API interfaces for synchronous calls
1 RDBMS with 1 DB per microservice
1 queue system for asynchronous calls
User A makes a request to an endpoint of microservice 1 using its API.
The endpoint's task is, for example, to calculate something and then put the result in a table of the microservice's DB.
How do we handle a failure of the database during the user's request?
Example:
During the request, the database crashes.
What to do then?
Return an error? But which error? 500?
But isn't the microservice architecture supposed to avoid this type of coupling?
Shall we make the system more loosely coupled?
Should the microservice save the data in a local file or queue and retry the insert into the DB?
But what about the user? It would be impossible for them to retrieve or update the data they just created, unless the system returns the result from the local data... but that's very complex, no?
How can we achieve that?
I have the same doubt with the use of queue systems.
In an event-driven design, each microservice has a consumer and a producer alongside it.
The consumer listens to a topic on the event bus and can then insert data into its DB.
The producer is triggered when an insert of the service's own data occurs in the DB, and sends the data to the event bus on a topic.
Imagine the event bus crashes...
If so, the consumer in the microservice will fail too.
If a DB insert occurs, the producer cannot emit the event to the event bus...
So is the data lost?
So should the producer keep the data in its local storage for retrying?
I have turned this question over in my head many times and have not come up with a resilient system.
Return an error? But which error? 500?
Yes, return an error. There is nothing wrong with returning an error from your microservice. Usually "500 Internal Server Error" is the right error when you have a failure in your database. This is standard behavior for a REST API.
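In Spring, for example, a handler along these lines translates a database failure into a 500; this is only a sketch, and catching DataAccessException is just one plausible choice:

    import org.springframework.dao.DataAccessException;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.ExceptionHandler;
    import org.springframework.web.bind.annotation.RestControllerAdvice;

    @RestControllerAdvice
    class DatabaseFailureHandler {
        // Translate any database failure into a plain 500 for the caller.
        @ExceptionHandler(DataAccessException.class)
        ResponseEntity<String> onDatabaseDown(DataAccessException e) {
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                                 .body("Internal Server Error");
        }
    }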
But isn't the microservice architecture supposed to avoid this type of
coupling? Shall we make the system more loosely coupled?
I think there is some confusion here. A microservice communicating with its own database is not considered coupling. Micro-service-A using its own micro-service-A-database is considered one logical unit or vertical. Micro-service-A would not be of much use without its database, and vice versa. This is totally fine, and you can look at it the same way as a standard web application with its front end, back end (similar to your service) and database. Coupling should be avoided across different microservices: for example, micro-service-A should not be tightly coupled with micro-service-B or micro-service-C. Each microservice should be atomic and as independent as possible, but not from its database, cache or similar; you can consider the database a logical part of it.
Should the microservice save the data in a local file or queue and
retry the insert into the DB?
No, it is expected that the database can fail, or at least you have to deal with that option. From the user's perspective you would just return error code 500. For most cases this is the expected behavior. There are some special cases where you want to save the data at any cost and not lose it for that request (there are ways to deal with this as well).
But what about the user?
If it is a standard web user, then they would retry a couple of times and, if the problem persists, probably come back later and try again (there is nothing wrong with returning error 500 here). If by user you mean that another microservice is doing the call, then that calling microservice has to expect that failures can happen. What to do then depends on the operation. For example, consider micro-service-A calling micro-service-B with an HTTP request:
GET call: here you can build a retry policy in micro-service-A, and if micro-service-B still responds with 500 after the retries, you can return an error to the user who called micro-service-A (see the sketch after this list).
POST/PUT/PATCH call: here you can also retry, similar to the GET calls, but only if one service is involved. If micro-service-A calls micro-service-B and then micro-service-C, and one call succeeds (saving some data) while the other fails, you have to consider sagas (if the operation should be transactional).
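A bare-bones sketch of such a retry policy for the GET case; in practice you would likely use a library such as Spring Retry or Resilience4j, and the AccountClient name is hypothetical:

    import org.springframework.web.client.RestClientException;
    import org.springframework.web.client.RestTemplate;

    class AccountClient {
        private final RestTemplate restTemplate = new RestTemplate();

        // Retry an idempotent GET a few times before giving up and letting
        // the failure propagate to our own caller (who then sees a 500).
        String getAccount(String url) {
            RestClientException last = null;
            for (int attempt = 1; attempt <= 3; attempt++) {
                try {
                    return restTemplate.getForObject(url, String.class);
                } catch (RestClientException e) {
                    last = e; // remember the failure and try again
                }
            }
            throw last;
        }
    }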
Imagine the event bus crashes... If so, the consumer in the
microservice will fail too.
If your local microservice database crashes, then all the other channels should fail as well: if you cannot save your entity in your local microservice DB, why would you go further with the operation, such as publishing a message to a queue? The source of truth for your data/entities is the database (at least in most cases), so if the database fails you should throw an exception and return an error to the caller/user.
If a DB insert occurs, the producer cannot emit the event to the event
bus... So is the data lost? So should the producer keep the data in its
local storage for retrying?
In the case where you save your data/entity in the database but the queue is not available, you could simply save the message/event to a table in the DB and then publish the message/event when the queue is up and running again. This is actually a very common pattern in these situations, often called the transactional outbox. For example, your implementation could be transactional:
Save the entity to its table.
Save the event/message to an Event table.
This way, if one write fails, the whole operation is rolled back.
You can have a background worker (depending on the tech you are using for your back end) to publish the messages to the queue asynchronously.
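A sketch of the outbox approach; the repository and queue interfaces below are assumed abstractions, not a concrete API:

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    import java.util.List;

    interface OrderRepository { void save(Object order); }
    interface OutboxRepository {
        void save(OutboxEvent e);
        List<OutboxEvent> findUnpublished();
        void markPublished(OutboxEvent e);
    }
    interface EventQueue { void send(OutboxEvent e); }

    record OutboxEvent(String type, long entityId) {}

    @Service
    class OrderWriter {
        private final OrderRepository orders;
        private final OutboxRepository outbox;

        OrderWriter(OrderRepository orders, OutboxRepository outbox) {
            this.orders = orders;
            this.outbox = outbox;
        }

        // Entity and event are written in ONE local transaction:
        // either both rows are persisted or neither is.
        @Transactional
        public void createOrder(Object order, long id) {
            orders.save(order);
            outbox.save(new OutboxEvent("OrderCreated", id));
        }
    }

    @Service
    class OutboxPublisher {
        private final OutboxRepository outbox;
        private final EventQueue queue;

        OutboxPublisher(OutboxRepository outbox, EventQueue queue) {
            this.outbox = outbox;
            this.queue = queue;
        }

        // Background worker: drain the outbox whenever the queue is reachable.
        // If send fails, the rows stay unpublished and are retried next run.
        @Scheduled(fixedDelay = 5000)
        public void publishPending() {
            for (OutboxEvent e : outbox.findUnpublished()) {
                queue.send(e);
                outbox.markPublished(e);
            }
        }
    }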

How to roll back distributed transactions?

I have three different Spring Boot projects with separate databases, e.g. account-rest, payment-rest and gateway-rest.
account-rest: creates a new account
payment-rest: creates a new payment
gateway-rest: calls the other endpoints
At gateway-rest there is an endpoint which calls the other two endpoints:
    @GetMapping("/gateway-api")
    @org.springframework.transaction.annotation.Transactional(rollbackFor = RuntimeException.class)
    public String getApi() {
        String accountId = restTemplate.getForObject("http://localhost:8686/account", String.class);
        restTemplate.getForObject("http://localhost:8585/payment?accid=" + accountId, String.class);
        throw new RuntimeException("rollback everything");
    }
I want to roll back the transactions and revert everything when I throw an exception at the gateway or any other endpoint.
How can I do that?
It is impossible to roll back external dependencies accessed via REST or similar.
The only thing that you can do is compensate for the errors; you can use a pattern like Saga.
I hope this helps.
You are basically doing dual persistence. That's not ideal, for two reasons:
It increases latency and thus has a direct impact on user experience.
What if one of the writes fails?
As the other answer pointed out, the Saga pattern is an option for posting compensating transactions.
The other option, and it is better to go with this by all means, is to avoid dual persistence by writing to only one service synchronously and then using Change Data Capture (CDC) to asynchronously update the other service. If we can design it this way, we can ensure atomicity (all or nothing), and the rollback scenario itself will probably never surface.
Refer to these two answers also, if they help:
https://stackoverflow.com/a/54676222/1235935
https://stackoverflow.com/a/54527066/1235935
By all means avoid distributed transactions and two-phase commit. They are not a good solution and create a lot of operational overhead, locking, etc. when the transaction coordinator fails after the prepare phase and before the commit phase. Worse things happen when the transaction coordinator's data gets corrupted.
For that purpose you need an external transaction management system. It will handle the distributed transactions and commit/roll back when they are finished on all services.
Possible flow example:
A request comes in.
gateway-rest starts a distributed transaction plus a local transaction and sends a request (with the transaction ID) to payment-rest. The thread holding the transaction lives until all local transactions are finished.
payment-rest knows about the global transaction and starts its own local transaction.
When all local transactions are marked as committed, the TM (transaction manager) sends a request to each service to close its local transaction and closes the global transaction.
In your case you can use sagas, as mentioned by many others, but they are event-based and asynchronous in nature.
If you want a synchronous kind of API, you can do something similar to this:
First, let's take the Amazon example of creating an order, taking the balance out of your wallet and completing the order:
Create the Order in a Pending state.
reserveBalance in the Account service for the order ID.
If the balance was reserved, change the Order state to Confirmed (also keeping the transaction ID for the reservation) and send reserveBalanceConsumed to the Account service.
Else change the Order state to Cancelled with the reason "not enough balance".
Now there are cases where, let's say, the Account service balance is reserved but for some reason the order is never confirmed.
Then something could periodically check: if there is a reserved balance for some order and, say, more than 30 minutes have passed, check whether that order is marked as confirmed with that transaction ID; if so, call reserveBalanceConsumed, else cancel that order with the reason "some error please try again" and mark the balance as free (a sketch follows below).
NOW, THESE TYPES OF SYSTEMS ARE COMPLEX TO BUILD. Use the Saga pattern in general for a simpler structure.
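A sketch of that periodic check (every type and method name below is hypothetical):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    // Sketch: release reservations whose order was never confirmed in time.
    class ReservationReconciler {
        interface ReservationStore {
            List<Reservation> olderThan(Instant cutoff);
            void free(Reservation r);
        }
        interface OrderStore {
            boolean isConfirmed(String transactionId);
            void cancel(String orderId, String reason);
        }
        record Reservation(String orderId, String transactionId, Instant createdAt) {}

        private final ReservationStore reservations;
        private final OrderStore orders;

        ReservationReconciler(ReservationStore reservations, OrderStore orders) {
            this.reservations = reservations;
            this.orders = orders;
        }

        // Run this periodically, e.g. from a scheduled job.
        void reconcile() {
            Instant cutoff = Instant.now().minus(Duration.ofMinutes(30));
            for (Reservation r : reservations.olderThan(cutoff)) {
                if (orders.isConfirmed(r.transactionId())) {
                    // The order went through: consume the reserved balance
                    // (call reserveBalanceConsumed on the Account service).
                } else {
                    orders.cancel(r.orderId(), "some error please try again");
                    reservations.free(r); // mark the balance as free again
                }
            }
        }
    }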

Saga Choreography implementation problems

I am designing and developing a microservice platform based on the specifications of http://microservices.io/
The entire framework communicates through sockets, removing the overhead of the multiple HTTP requests that most REST APIs incur.
A service registry host receives the registrations of multiple microservice hosts; each microservice is responsible for a domain of the business. Another host, which we call a router (or API gateway), is responsible for exposing the microservices for consumption by third parties.
We will use Sagas (in the choreography style) to distribute the requests, so we have some questions:
Should a microservice issue the event to a process manager, or should it be passed directly to the next microservice responsible for the chain of events? (The same logic applies to rollback.)
Who should know how to build the Saga chain of events? The first microservice that receives a piece of work, or the router?
If an event needs to pass a very large volume of data to the next Saga step, how is this done in terms of the request structure? Is it divided into multiple Sagas, for example (something like result pagination)?
I think the main point is: in this router-and-microservice structure, who is responsible for building the Sagas and propagating their events?
The article Patterns for Microservices — Sync vs. Async does a great job defining many of the terms used here and has animated gifs demonstrating sync vs. async and orchestrated vs. choreographed as well as hybrid setups.
I know the OP answered his own question for his use case, but I want to try to address the questions raised a bit more generally, in light of the linked article.
Should a microservice issue the event to a process manager or should it be passed directly to the next microservice responsible for the chain of events?
To use a more general term, a process manager is an orchestrator. A concrete implementation may involve a stateful actor that orchestrates a workflow, keeping track of its progress in some way. Since a saga is itself a workflow (composed of both forward and compensating actions), it would be the job of the process manager to keep track of the state of the saga until completion (success or failure). This typically involves the actor sending synchronous calls to services and waiting for some result before going to the next step. Parallel operations can of course be introduced, but the point is that this actor dictates the progression of the saga.
This is fundamentally different from the choreography model. With this model there is no central actor keeping track of the state of a saga, but rather the saga progresses implicitly via the events that each step emits. Arguably, this is a more pure case of an event-driven model since there is no coordination.
That said, the challenge with this model is observing the state at any given point in time. With the orchestration model above, in theory, each actor could be queried for the state of its saga. In the choreographed model we don't have this luxury, so in practice a correlation ID is added to every message belonging to (in this case) a saga. If the messages are queryable in some way (the event bus supports it, or through some other storage means), then the messages corresponding to a saga can be queried and the saga state reconstructed (effectively an event-sourced model).
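A minimal sketch of such a correlated message (the SagaMessage shape is illustrative):

    import java.util.UUID;

    // Every message in a choreographed saga carries the saga's correlation ID,
    // so the saga's state can later be reconstructed from the message log.
    record SagaMessage(UUID sagaId, String type, String payload) {

        // The first step mints a fresh correlation ID.
        static SagaMessage startSaga(String type, String payload) {
            return new SagaMessage(UUID.randomUUID(), type, payload);
        }

        // Follow-up steps reuse the same sagaId instead of minting a new one.
        SagaMessage next(String nextType, String nextPayload) {
            return new SagaMessage(sagaId, nextType, nextPayload);
        }
    }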
Who should know how to build the Saga chain of events? The first microservice that receives a certain work or the router?
This is an interesting question in itself, and one that I have been thinking about quite a lot. The easiest and default answer would be: hard-code the saga plans and map them to the incoming message types, e.g. message A triggers plan X, message B triggers plan Y, etc.
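In its simplest form that mapping is just a lookup table; a minimal sketch, with hypothetical message and step names:

    import java.util.List;
    import java.util.Map;

    // Sketch: a hard-coded map from incoming message type to a saga plan,
    // i.e. the ordered list of steps to run. All names are hypothetical.
    class SagaPlans {
        private static final Map<String, List<String>> PLANS = Map.of(
            "OrderPlaced",     List.of("reserve-stock", "charge-payment", "ship"),
            "RefundRequested", List.of("refund-payment", "restock"));

        static List<String> planFor(String messageType) {
            return PLANS.getOrDefault(messageType, List.of());
        }
    }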
However, I have been thinking about what a control plane might look like that manages these plans and provides a mechanism for dynamically pushing changes to message handlers and/or orchestrators. The two specific use cases I have in mind are changes to authorization policies and dynamically adding new steps to a plan.
If an event needs to pass a very large volume of data to the next Saga step, how is this done in terms of the request structure? Is it divided into multiple Sagas, for example (something like result pagination)?
The way I have approached this is to include references to the large data when these are objects such as files. For data that are inherently streams, the message can reference a parallel channel that the consumer reads from once it receives the message. I think the important distinction here is to decouple the messages driving the workflow from where the data is physically materialized, which depends on the data representation.
In microservices, every microservice should be responsible for its own business domain.
Should a microservice issue the event to a process manager or should it be passed directly to the next microservice responsible for the chain of events? (The same logic applies to rollback.)
Events are not passed to the next microservice; they are published, and all microservices interested in an event should subscribe to it.
If you need rollback, you should consider orchestration.
Who should know how to build the Saga chain of events? The first microservice that receives a piece of work, or the router?
The microservice that publishes the event will certainly know how to build it. There is no chain of events, because every microservice interested in the event subscribes to it separately.
If an event needs to pass a very large volume of data to the next Saga step, how is this done in terms of the request structure? Is it divided into multiple Sagas, for example (something like result pagination)?
Only publish the data that others may be interested in, not everything. In most cases the data is not large, and a message queue can handle it efficiently.

CQRS + Microservices: Handling event rollback

We are using microservices, CQRS and an event store (with the Node.js cqrs-domain module); everything works like a charm, and the typical flow goes like this:
1. REST
2. Service
3. Command validation
4. Command
5. Aggregate
6. Event
7. Event store (transactional data)
8. Return the aggregate with its aggregate ID
9. Store in the microservice's local DB (essentially the read DB)
10. Publish the event to the queue
The problem with the flow above is that the persistence to the event store and the write to the microservice's read DB happen in different transaction contexts. If there is any failure at step 9, how should I handle the event, which has already been written to the event store, and the aggregate, which has already been updated?
Any suggestions would be highly appreciated.
The problem with the flow above is that the persistence to the event store and the write to the microservice's read DB happen in different transaction contexts. If there is any failure at step 9, how should I handle the event, which has already been written to the event store, and the aggregate, which has already been updated?
You retry it later.
The "book of record" is the event store. The downstream views (the "published events", the read models) are derived from the book of record. They are typically behind the book of record in time (eventual consistency) and are not typically synchronized with each other.
So you might have, at some point in time, 105 events written to the book of record, but only 100 published to the queue, and a representation in your service database constructed from only 98.
Updating a view is typically done in one of two ways. You can, of course, start with a brand new representation and replay all of the events into it as part of each update. Alternatively, you track in the metadata of the view how far along in the event history you have already gotten, and use that information to determine where the next read of the event history begins.
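A sketch of the second approach, tracking a checkpoint in the view's metadata (the EventStore interface and Event shape below are assumptions):

    import java.util.List;

    // Sketch: the view records how far into the event history it has
    // gotten and resumes from there on the next update.
    class ReadModelUpdater {
        interface EventStore { List<Event> readFrom(long position); }
        record Event(long position, String payload) {}

        private final EventStore eventStore;
        private long lastApplied; // persisted alongside the view's metadata

        ReadModelUpdater(EventStore eventStore, long lastApplied) {
            this.eventStore = eventStore;
            this.lastApplied = lastApplied;
        }

        void update() {
            for (Event e : eventStore.readFrom(lastApplied + 1)) {
                apply(e);                   // idempotent projection into the read DB
                lastApplied = e.position(); // checkpoint advances only after apply
            }
        }

        private void apply(Event e) { /* write to the read model here */ }
    }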
Inside your event store, you could track whether read-side replication was successful.
As soon as step 9 succeeds, you can flag the event as 'replicated'.
That way, you could introduce a component that watches for unreplicated events and triggers step 9 again. You could also track whether the replication has failed multiple times.
Updating the read side (step 9) and flagging an event as replicated should happen consistently. You could use a saga pattern here.
I think I have now understood it better.
The aggregate would still be created; the answer is that all validations for any type of consistency should happen before the aggregate is constructed. What needs to be handled is a failure beyond the purview of the code, such as a failure while updating the microservice's read-side DB.
So in the ideal case the aggregate is created, but the associated event remains undispatched until all the read dependencies are updated; if they are not, it stays undispatched and can be handled separately.
The event store still has all the events, and eventual consistency is maintained this way.
