Event-sourcing: Dealing with derived data

How does an event-sourcing system deal with derived data? All the examples I've read on event-sourcing demonstrate services reacting to fact events. A popular example seems to be:
Bank Account System
- Events: Funds deposited, Funds withdrawn
- Services: Balance service
They then show how the Balance service can, at any point, derive a state (i.e. the balance) from the events. That makes sense; those events are facts. There's no question that they happened - they are external to the system.
However, how do we deal with data calculated BY the system?
E.g.
Overdrawn service:
A service responsible for monitoring the balance and performing some action when it goes below zero.
Does the event-sourcing approach dictate how we should use (or not use) derived data such as the balance? Perhaps one of the following?
1) Use: [Funds Withdrawn event] + [Balance service query]
Listen for the "Funds withdrawn" event and then ask the Balance service for the current balance.
2) Use: [Balance changed event]
Get the Balance service to throw a "Balance changed" event containing the current balance. Presumably this isn't a "fact", as it's not external to the system, and is therefore prone to miscalculation.
3) Use: [Funds withdrawn event] + [Funds deposited event]
We could just skip the Balance service and have each service maintain its own balance directly from the facts. ...though that would result in each service having its own (potentially different) version of the balance.

A service responsible for monitoring the balance and performing some action when it goes below zero.
Executive summary: the way this is handled in event sourced systems is not actually all that different from the alternatives.
Stepping back a second - the advantage of having a domain model is to ensure that all proposed changes satisfy the business rules. Borrowing from the CQRS language: we send command messages to a command handler. The handler loads the state of the model, and tries to apply the command. If the command is allowed, the state of the domain model is updated and saved.
After persisting the state of the model, the command handler can query that state to determine if there are outstanding actions to be performed. Udi Dahan describes this in detail in his talk on Reliable Messaging.
So the most straightforward way to describe your service is one that updates the model each time the account balance changes, and sets the "account overdrawn" flag if the balance is negative. After the model is saved, we schedule any actions related to that state.
Part of the justification for event sourcing is that the state of the domain model is derivable from the history. Which is to say, when we are trying to determine if the model allows a command, we load the history, and compute from the history the current state, and then use that state to determine whether the command is permitted.
What this means, in practice, is that we can write an AccountOverdrawn event at the same time that we write the AccountDebited event.
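A minimal sketch of that idea, assuming a simple in-memory fold; the decide/evolve split and the Python shapes are illustrative conventions, not something the answer prescribes (only the AccountDebited/AccountOverdrawn names come from the text above):

```python
from dataclasses import dataclass

# Illustrative event types for the bank-account example.
@dataclass
class AccountCredited:
    amount: int

@dataclass
class AccountDebited:
    amount: int

@dataclass
class AccountOverdrawn:
    balance: int

def evolve(balance, event):
    """Fold one event into the current state (the account balance)."""
    if isinstance(event, AccountCredited):
        return balance + event.amount
    if isinstance(event, AccountDebited):
        return balance - event.amount
    return balance  # AccountOverdrawn carries no balance change of its own

def current_balance(history):
    """State is derivable from the history: replay every event."""
    balance = 0
    for event in history:
        balance = evolve(balance, event)
    return balance

def debit(history, amount):
    """Command handler: load the history, compute the state, decide which new events to write."""
    balance = current_balance(history)
    new_events = [AccountDebited(amount)]
    if balance - amount < 0:
        # The derived fact is recorded in the same write as the debit itself.
        new_events.append(AccountOverdrawn(balance - amount))
    return new_events

# debit([AccountCredited(100)], 150) -> [AccountDebited(150), AccountOverdrawn(-50)]
```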
That AccountDebited event can be subscribed to (pub/sub). The typical handling is that new events get published after they are successfully written to the book of record. An event listener subscribing to the events coming out of the domain model observes the event and schedules the command to be run.
Digression: typically, we'll want at-least-once execution of these activities. That means keeping track of acknowledgements.
Therefore, the event handler is also a thing with state. It doesn't have any business state in it, and certainly no rules that would allow it to reject events. What it does track is which events it has seen, and which actions need to be scheduled. The rules for loading this event handler (more commonly called a process manager) are just like those of the domain model - load events from the book of record to obtain the current state, then see if the event being handled changes anything.
So it is really subscribing to two events - the AccountDebited event, and whatever event returns from the activity to acknowledge that it has completed.
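A rough sketch of such a process manager, assuming a simple in-memory model; the class, event and command names are made up for illustration:

```python
class OverdrawnProcessManager:
    """The 'event handler with state' described above: it has no business rules,
    it only tracks which events it has seen and which actions still await
    acknowledgement (so at-least-once delivery stays safe)."""

    def __init__(self):
        self.seen_event_ids = set()
        self.pending_actions = {}   # event_id -> command awaiting acknowledgement

    def handle_account_overdrawn(self, event_id, account_id):
        if event_id in self.seen_event_ids:
            return []               # duplicate delivery: nothing new to schedule
        self.seen_event_ids.add(event_id)
        command = ("NotifyAccountHolder", account_id)
        self.pending_actions[event_id] = command
        return [command]            # schedule the activity

    def handle_acknowledged(self, event_id):
        # The second subscription: the activity reports back that it completed.
        self.pending_actions.pop(event_id, None)

    def unacknowledged_work(self):
        return list(self.pending_actions.values())
```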
This same mechanic can be used to update the domain model in response to events from elsewhere.
Example: suppose we get a FundsWithdrawn event from an ATM, and we need to update the account history to match it. So our event handler gets loaded, updates itself, and schedules a RecordATMWithdrawal command to be run. When the command runs, it loads the account, updates the balances, and writes out the AccountDebited and AccountOverdrawn events as before. The event handler sees these events, loads the correct process state based on the metadata, and updates the state of the process.
In CQRS terms, this is all taking place in the "write models"; these processes are all about updating the book of record.
The balance query itself is easy - we already showed that the balance can be derived from the history of the domain model, and that's just how your balance service is expected to do it.
To sum up: at any given time you can load the history of the domain model to query its state, and you can load the history of the event processor to determine what work has yet to be acknowledged.

Event sourcing is an evolving discipline with a bunch of diverse practices, practitioners and charismatic people. You can't expect them to provide you with a single consistent modelling technique for all the scenarios you described. Each of those scenarios has its pros and cons, and you identified some of them. It may also vary dramatically from one project to another, because the business requirements (the evolutionary pressures of the market) will differ.
If you are working on some mission-critical system and you want a very consistent balance at all times, it's better to use an RDBMS and ACID transactions.
If you need maximum speed and you are okay with eventually consistent states and not too anxious about the precision of your balances (some events may go missing here and there for a bunch of reasons), then you can derive your balance projections from the events asynchronously.
In both scenarios you can use event sourcing, but you don't necessarily have to generate your projections asynchronously. It's okay to generate a projection in the same transaction scope as you make changes to your write model, if you really need to.
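For example, a minimal sketch of a same-transaction projection using SQLite; the table layout and event shapes are assumptions, the point is only that the event append and the balance update share one commit:

```python
import json
import sqlite3

# The event journal and the balance projection live in the same relational store,
# so they can be written in one transaction and the projection never lags behind.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stream_id TEXT, payload TEXT)")
conn.execute("CREATE TABLE balances (stream_id TEXT PRIMARY KEY, balance INTEGER)")

def append_and_project(stream_id, event_type, amount):
    delta = amount if event_type == "FundsDeposited" else -amount
    with conn:  # one transaction covers both writes
        conn.execute(
            "INSERT INTO events VALUES (?, ?)",
            (stream_id, json.dumps({"type": event_type, "amount": amount})),
        )
        conn.execute(
            "INSERT INTO balances VALUES (?, ?) "
            "ON CONFLICT(stream_id) DO UPDATE SET balance = balance + ?",
            (stream_id, delta, delta),
        )

append_and_project("acct-1", "FundsDeposited", 100)
append_and_project("acct-1", "FundsWithdrawn", 30)
print(conn.execute("SELECT balance FROM balances WHERE stream_id = 'acct-1'").fetchone())
# (70,)
```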
Will it make Greg Young happy? I have no idea, but who cares about such things if your balances may one day go out of sync in a mission-critical system...

Related

Saga pattern: what if the compensation action fails?

We're trying to understand how to compensate for a "saga compensation failure".
We have two microservices, and two databases, one per microservice.
Customer microservice
Contract microservice
Use case: Customer alias modification.
1. Request is sent to "Customer microservice".
   a. The customer alias is modified on the customer table, but its state is pending.
   b. A "customer modified" event is sent.
2. The "customer modified" event is received by "Contract microservice".
   a. The received customer is updated on all contracts (we're using MongoDB), since customer information is embedded in each contract.
   b. A "contract updated" event is sent.
3. The "contract updated" event is received by "Customer microservice".
   a. The customer's state is set to confirmed.
If 3.a fails, a compensation action is performed, but what if that fails too?
This can be handled with a combination of the approaches below:
1. Implement the retry pattern for the compensation action.
2. Exception handling: the exception can be saved and then resolved through an automated process, such as a retry mechanism running in a separate application.
3. As an extension of approach #2, if the automated process is unable to resolve the issue, generate an exception report that can be reviewed manually so that action can be taken, as in the sketch below.
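For illustration, a rough sketch of combining those approaches: retry the compensating action with backoff, then escalate to a manual exception report. The function names (including `file_exception_report`) are hypothetical hooks, not a known library API:

```python
import time

def run_compensation_with_retry(compensate, max_attempts=5, base_delay=1.0):
    """Retry the compensating action; if it keeps failing, persist an
    exception report for manual review instead of losing the failure."""
    for attempt in range(1, max_attempts + 1):
        try:
            compensate()
            return True
        except Exception as error:
            if attempt == max_attempts:
                file_exception_report(error)   # hypothetical escalation hook
                return False
            time.sleep(base_delay * 2 ** (attempt - 1))  # exponential backoff

def file_exception_report(error):
    # In a real system this would go to a queue or table monitored by operators.
    print(f"manual intervention required: {error}")
```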
It looks like you are using the term saga, but you really mean you want a transaction. If you really need a transaction, do that (you can look at solutions like https://docs.temporal.io/ for providing that).
[personally I think transactions between services are bad, and if I need a transaction between services, I try to rethink my design, but your mileage may vary]
You didn't specify why contracts would reject the change. If there are business rules involved, that's one thing, but if these are "technical reasons" like availability etc., then the thing to do is to make sure the event is persisted and was sent (e.g. with an outbox pattern on the sending side) and have the consuming service(s) handle it when they can.
If there are business rules involved then maybe it is a bad example, but I'd expect a person can still change their alias regardless, and the compensation would be keeping some of the contracts with the old alias, or something along these lines.
By the way, it seems you have a design issue that causes needless temporal coupling between your services.
If the alias is important in contracts but owned by the customers service, the alias stored in the contracts should be considered a cached copy.
In this case the customers service can close the update regardless of what the other services do. It can fire the event, and you can complete the process on the contracts service whenever you can. When a contract is read you can check whether there's a newer version of the customer and, if so, fetch it; you may also (depending on the business requirements) specify that the data is correct as of the last update. A sketch of this follows.
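A small sketch of that caching idea, with hypothetical repository/service interfaces and field names:

```python
def read_contract(contract_id, contracts_repo, customers_service):
    """The contract keeps the alias it last saw plus the customer version it
    came from; on read we lazily refresh it if the customers service has
    something newer."""
    contract = contracts_repo.get(contract_id)
    customer = customers_service.get(contract["customer_id"])
    if customer["version"] > contract["customer_version"]:
        contract["customer_alias"] = customer["alias"]
        contract["customer_version"] = customer["version"]
        contracts_repo.save(contract)  # refresh the cached copy
    return contract
```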
BASE vs. ACID:
ISOLATION: As local transactions are committed while the Saga is running, their changes are already visible to other concurrent transactions, despite the possibility that the Saga will fail eventually, causing all previously applied transactions to be compensated. I.e., from the perspective of the overall Saga, the isolation level is comparable to “read uncommitted.”
Eventually other services will read those inconsistent events, make wrong decisions based on them, and generate even more events that should never have happened at all.
In the end there will be tons of events to roll back (how is that even possible if your system lets users do more than is allowed in the real world? Can you take back an ice cream that was sold to a kid five minutes ago?).

DDD dealing with Eventual consistency for multiple aggregates inside a bounded context with NoSQL

I am currently working on a DDD geolocation application that has two separate aggregate roots inside one bounded context. Due to frequent coordinate updates I am using Redis to persist my data, which doesn't allow rollbacks.
My first aggregate root is a trip object containing driver (users), passengers (list of users), etc.
My second aggregate root is user position updates
When a coordinate update is sent I will generate and fire a "UpdateUserPostionEvent". As a side effect I will also generate and fire a "UpdateTripEvent" at a certain point, which will update coordinates of drivers/passengers.
My question is: how can I deal with eventual consistency if I am firing my "UpdateLiveTripEvent" asynchronously? My UpdateLiveTripEventHandler has several points of failure, and besides logging an error, how can I deal with this inconsistency?
I am using a library called MediatR and its INotificationHandler, which, as far as I know, is "fire and forget".
Edit: Ended up finding this SO post that describes exactly what I need (saga/process manager), but unfortunately I am unable to find any kind of saga implementation for handling events within the same BC. All the examples I am seeing involve a service bus.
Same or different Bounded Context; with or without Sagas; it does not matter.
Why would event handling fail? Domain rules or infrastructure.
Domain rules:
A raised event handled by an aggregate (the event handler uses the aggregate to apply the event) should NEVER fail because of domain rules.
If the "target" aggregate has domain rules that reject the event, your aggregate design is wrong. Commands/operations can be rejected by domain rules. Events cannot be rejected (nor undone) by domain rules.
An event should be raised only after all the domain rules for the operation have been checked by the "origin" aggregate. The "target" aggregate applies the event and maybe raises another event with some values it calculates itself (domain rules again, not to reject the event, since events cannot be rejected by domain rules, but to "continue" the consistency "chain" with good responsibility segregation). That is why event names should be sentences in the past tense: they have already happened.
Event simulation:
Agg1: Hey buddies! User did this cool thing and everything seems to be OK. --> UserDidThisCoolThingEvent
Agg2: Woha, that is awesome! I'm gonna put +3 in User points. --> UserReceivedSomePointsEvent
Agg3: +3 points to this user? The user just reached 100 points. That is a lot! I'm gonna convert this user into a VIP user. --> UserTurnedIntoVIPEvent
Agg4: A new VIP user? Let's notify the rest of the users to create some envy ;)
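A toy sketch of that chain, just to show the mechanics: each handler applies the incoming event (it cannot reject it) and may raise a follow-up event of its own. The dict-based dispatcher and handler functions are illustrative only; the event names follow the simulation above:

```python
def on_user_did_cool_thing(user):
    user["points"] = user.get("points", 0) + 3
    return [("UserReceivedSomePointsEvent", user)]

def on_user_received_points(user):
    if user["points"] >= 100 and not user.get("vip"):
        user["vip"] = True
        return [("UserTurnedIntoVIPEvent", user)]
    return []

def dispatch(event_name, payload, handlers):
    """Apply the event in every interested aggregate, then dispatch follow-ups."""
    follow_ups = []
    for handler in handlers.get(event_name, []):
        follow_ups.extend(handler(payload))
    for name, data in follow_ups:
        dispatch(name, data, handlers)

handlers = {
    "UserDidThisCoolThingEvent": [on_user_did_cool_thing],
    "UserReceivedSomePointsEvent": [on_user_received_points],
}
user = {"points": 98}
dispatch("UserDidThisCoolThingEvent", user, handlers)
# user is now {'points': 101, 'vip': True}
```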
Infrastructure:
Fix it and apply the event. ;) Even "by hand" if needed once your persistence engine, network and/or machine is up again.
Automatic retries for short-lived failures. Error queues/logs so you don't lose your events (and can apply them later) during a longer outage.
Event sourcing also helps with this, because you can always reapply the persisted events to the "target" aggregate without any extra effort to keep the events somewhere (i.e. event logs): your domain persistence is also your event store.
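As a sketch, assuming the consumer keeps a checkpoint (offset) of what it has already applied, catching up after an outage is just re-reading the journal; the `event_store`/`apply_event` interfaces here are hypothetical:

```python
def catch_up(event_store, apply_event, last_processed_offset):
    """Re-read the journal from the last checkpoint and apply what was missed.
    Because the events are already persisted as the domain state, no separate
    event log has to be maintained for this."""
    offset = last_processed_offset
    for offset, event in event_store.read_from(last_processed_offset + 1):
        apply_event(event)          # idempotent application in the "target" aggregate
    return offset                   # persist this as the new checkpoint
```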

Event sourcing: splitting an event into more detailed ones

During the user registration process in my domain, several actions occur: a user is created (with email/password or with a linked social network account) and the user is logged in.
I see two options for how to record the events:
One UserRegistered event (which contains all the info: password hashes, external social accounts)
Multiple events UserCreated, UserPasswordSet, UserExternalAccountLinked, UserLoggedIn
Events from the second option (UserPasswordSet, UserExternalAccountLinked, UserLoggedIn) may also appear on their own later, when the corresponding operations are performed.
I understand the question and options may be subjective, but I would like to hear the opinions of experienced ES/DDD users on the issue.
I don't claim to be experienced, but I think it's simpler to output multiple fine-grained events rather than one complex event.
The pros are:
Simplicity - projections (including the aggregate itself) and other event handlers don't need to understand a complex UserRegistered event in addition to the fine-grained events
Less churn on the event schemas - e.g. if you change details of your authentication events, fewer event types will need to change (since there's no UserRegistered event to change)
Clarity - the events better capture the sequence of state changes involved in user registration
I can think of a minor con:
Non-atomic registration. It's likely projections could handle a single user-registered event and atomically create the read model in a state that the client can immediately query. If you have multiple events, the read model might handle them one by one, meaning the user may temporarily be in a half-registered state that you might not want to handle in your clients.
This can be avoided by having your read projection consume all available events and make its update in a single transaction, so that the sequence of events causes only a single transaction commit, and hence you never see a half-registered user. This is more efficient in any case, but might not be that simple, depending on your read store.
Alternatively, you can automatically filter out half-registered users when querying the service.
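A sketch of the single-transaction projection mentioned above; the read-store transaction API is an assumption, the event names follow the question:

```python
def project_registration(read_store, events):
    """Fold all currently available registration events into one read-model
    write, so a client never observes a half-registered user."""
    user = {}
    for event in events:
        if event["type"] == "UserCreated":
            user.update(id=event["user_id"], email=event["email"])
        elif event["type"] == "UserPasswordSet":
            user["password_hash"] = event["password_hash"]
        elif event["type"] == "UserExternalAccountLinked":
            user.setdefault("external_accounts", []).append(event["provider"])
    with read_store.transaction():      # one commit for the whole batch
        read_store.upsert("users", user)
```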

Remote persistent views with Lagom

In a classical microservice architecture, you have relevant domain events published on some messaging system which allows other parts of the system to react.
Now imagine you have three microservices: Customers, Orders and Recommendation. The Recommendation microservice needs information from Customers and Orders to provide its functionality, such as the list of all customers and all the orders, which is going to be analyzed by some machine learning algorithm. Now, you need to have the state of Customers "joined" with Orders on the Recommendation microservice:
1. You have the Recommendation microservice listen to domain events published by Customers and Orders and build its own state. This leads to logic duplication, since you probably have that same logic inside Customers and Orders already.
2. On each relevant domain message from Customers and Orders, you just go to them and ask for the state of the specific customer or order. This works fine; however, if you have N services rather than just one that needs to build a materialized view, you will put a big load on Customers and Orders.
3. You have Customers and Orders themselves publish "heavy-weight" events (not domain events) that allow any other microservice to build a materialized view without processing domain events. This allows you both a) not to duplicate the logic and b) not to keep asking for the same information.
Does pattern no. 3 have some drawbacks we couldn't figure out, and if not, how do you implement it in Lagom?
I will try to explain a few more bits in the hope to give you some more perspective on that matter and how you can achieve it in a reliable way in Lagom.
We have a few concepts that we must keep in mind. The most important one, which is the source of all the others, is Event Sourcing itself. Event Sourcing means that any State in the system has its source in Events.
The first State that we will deal with is the State of the PersistentEntity. This State is prominent because, together with the Command and Event Handler, it defines the consistency boundary of your model.
But there are other States in the system. Actually, we can create as many as we want, because we have the Event Journal. A read-model is also a State, and it's also generated from the events.
There are many reasons why you shouldn't publish the State of the PersistentEntity to other systems. The first is a matter of avoiding coupling: you don't want your data to leak to other services. That's all about having an anti-corruption layer (ACL).
So, from here we could say: before publishing Order and Customer to Recommendation Service, I will transform it to OrderView and CustomerView (ACL 101).
The question now is when will you do it? If you try to publish it in Kafka after you have handled a command, you don’t have any guarantee that the State will be published. There are no XA transactions between the event journal and the Kafka topic. So, there is a chance that the events are persisted, but for some reason, the State is not published in Kafka.
If you want data to get out of a service in a reliable way and without creating coupling between services, you have the following options:
Use the broker API and publish the events to a topic. You should not publish the events as they are, but transform them into the format of your external API (ACL).
Use a read-side processor to generate a view of it, again in the external API format you want to make available. If you want, you can publish that ViewState to a topic so other services can consume it directly.
That said, there is nothing wrong with publishing something to a topic that is not a real event but some derived State. The problem is how you can guarantee that it is effectively published. Doing that from inside the PersistentEntity is risky because you have at-most-once semantics. The most reliable way of doing it is a read-side process, which gives you at-least-once semantics.
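A language-agnostic sketch of that read-side approach (this is not the Lagom API; the store, topic and offset interfaces are assumptions), showing why advancing the offset only after a successful publish yields at-least-once semantics:

```python
def read_side_processor(event_store, topic, offset_store):
    """Track an offset in the journal, transform events into the external view
    format (the ACL), and publish them. Because the offset only advances after
    a successful publish, a crash replays from the last offset: at-least-once."""
    offset = offset_store.load()
    for offset, event in event_store.read_from(offset + 1):
        view = to_external_view(event)   # internal event -> OrderView/CustomerView
        topic.publish(view)              # may produce duplicates after a crash
        offset_store.save(offset)        # advance only once the publish succeeded

def to_external_view(event):
    # Hypothetical transformation; strips internal fields before they leak out.
    return {"type": event["type"] + "View", "data": event["public_data"]}
```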
Further comments inline...
Listen to domain events from customers and orders and rebuild the state in the recommendation service. This is a horrible idea because you would need to duplicate the logic that handles events across different bounded contexts
That's not a horrible idea. That's how you make your services independent from each other. The logic that you will need to implement to consume the events is not the same. As you said, it's a different bounded context; as such, it only gets what it needs.
Leaking the State from a BC to another is more problematic for the reasons I mentioned above (anti-corruption layer).
To achieve decoupling you do need more coding, and there is nothing wrong with that. At the end of the day, the reason for building microservices is to avoid coupling and be able to let the services evolve and scale without interfering with each other. There is a price to pay for that, and the price is to write more code. You need to evaluate the trade-offs.
You can consume your own events, produce an OrderView and CustomerView and publish into Kafka, but that's the same as consuming the events directly on the Recommendation Service.
Note that you also need to store OrderView and CustomerView somewhere in the Recommendation Service. So you end up storing it three times: on the original service (view table), in Kafka and in the Recommendation Service.
That's why publishing events in a topic is the best option to propagate data between services.
Every time we receive a domain event from customers or orders, go to them and ask them the state. This is horrible because if you have more than one microservice that needs their state, you will end up producing load on customers and orders
That is indeed a horrible idea, because you will make the Recommendation Service dependent on the other two services. If Order or Customer is down, the Recommendation Service will be down as well. That's what a broker helps to solve.
Have customers and orders not only publish events but also state, and have all the services that need to build materialized views listen to the state they need. How do you apply the last pattern with Lagom? We found no way to listen to state changes, just to events. One solution we considered was publishing the state with pubSub in the onEvent handler of a persistent entity, but I am not sure this is the right place to make it happen.
Using pubSub in the onEvent handler is the worst solution of all. For the following reasons:
pubSub has at-most-once semantics (see comments above)
Event handlers are called many times. Whenever you re-hydrate an Entity, the events are replayed and the event handlers will be used for that, which means that you will re-publish the state each time. Actually, you would solve the at-most-once pubSub problem, but not in the way you might expect/desire.
You could use the afterPersist callback for that, but that's not reliable either, because pubSub is at-most-once.
PubSub inside a PersistentEntity should not be used for something that you need to be reliable. It's a best-effort capability, that's all.

If nobody needs reliable messaging on transport level, how to implement reliable PubSub on business level?

This question is mostly out of curiosity. I read this article about WS-ReliableMessaging by Marc de Graauw some time ago and agreed that reliable messaging should be applied at the business level whenever possible.
Now, the question is, he explains clearly what his approach is in a point-to-point fashion. However, I fail to see how you could implement reliable messaging on the business level in a Publish/Subscribe situation.
I will try to demonstrate the difference by showing commands (point-to-point) vs. events (publish/subscribe). Note that these examples are highly simplified.
Command: Transfer(uniqueId, amount, sourceAccount, recipientAccount)
If the account holder sends this transfer, he could wait for the confirmation MoneyTransferred (assuming this event will contain a reference to the uniqueId in the Transfer command).
If the account holder doesn't receive the MoneyTransferred within a given timeout period, he could send the same command again (of course assuming the command processor is idempotent).
So I see how reliable messaging could work at the business level in a point-to-point fashion.
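As a sketch of that point-to-point retry: re-send the same Transfer command (same uniqueId) until a MoneyTransferred confirmation referencing that id arrives. It assumes the command handler is idempotent, as the question notes; the `send_command`/`wait_for_confirmation` helpers are hypothetical:

```python
import uuid

def transfer_with_retry(send_command, wait_for_confirmation, amount, source, recipient,
                        timeout=5.0, max_attempts=3):
    """Business-level reliability for a command: retry with the same uniqueId
    until the confirming event is observed, or give up and escalate."""
    unique_id = str(uuid.uuid4())
    for _ in range(max_attempts):
        send_command({"type": "Transfer", "uniqueId": unique_id,
                      "amount": amount, "source": source, "recipient": recipient})
        if wait_for_confirmation("MoneyTransferred", unique_id, timeout):
            return True
    return False   # still unconfirmed after max_attempts: escalate
```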
Now, say the previous command succeeded and produced a MoneyTransferred event. Somewhere in the system we have an event processor (MoneyTransferEmailNotifier) that handles MoneyTransferred events and will send an email notification to the recipient of the transfer.
This MoneyTransferEmailNotifier is subscribed to MoneyTransferred events. But note that the system sending the MoneyTransferred event does not really care who or how many listeners there are for this event. The whole point is the decoupling here. I raise an event and don't care if there are zero or 20 listeners subscribed to it.
At this point, if there is no reliable messaging (minimally at-least-once-delivery) provided by the infrastructure, how can we prevent the loss of the MoneyTransferred event? I do want the recipient to get his e-mail notification.
I fail to see how any real 'business-level' solution will resolve this.
(1) One of the solutions I can think of is to explicitly subscribe to events at the "business level", thereby bypassing any infrastructure component. But aren't we then introducing infrastructure into our business?
(2) The other 'solution' would be by introducing a process manager that does something like this:
PM receives Transfer command
PM forwards Transfer command to the accounts subsystem
If successful, sends command SendEmailNotification(recipient) to the notification subsystem
This does seem to be the solution that DDD prescribes, correct? But doesn't this introduce more coupling?
What do you think?
Edit 2016-04-16
Maybe the root question is a bit simpler: if you do not have an infrastructural component that ensures at-least-once or exactly-once delivery, how can you ensure (when you're on an at-most-once infrastructure) that the events you emit will be received?
Not all events need to be delivered but there are many that are key (like the example of sending the confirmation email)
This MoneyTransferEmailNotifier is subscribed to MoneyTransferred events. But note that the system sending the MoneyTransferred event does not really care who or how many listeners there are for this event. The whole point is the decoupling here. I raise an event and don't care if there are zero or 20 listeners subscribed to it.
Your tangle, I believe, is here - the assumption that only the publish/subscribe middleware can deliver events to where they need to go.
Greg Young covers this in his talk on polyglot data (slides).
Summarizing: the pub/sub middleware is in the way. A pull-based model, where consumers retrieve data from the durable event store, gives you a reliable way to retrieve the messages from the store. So you pull the data from the store, and then use the business-level data to recognize work that has already been done.
For instance, upon retrieving the MoneyTransferred event with its business data, the process manager looks around for an EmailSent event with matching business data. If the second event is found, the process manager knows that at least one copy of the email was successfully delivered, and no more work need be done.
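A sketch of that check, with a hypothetical event-store interface; the business key here is an assumed transfer_id carried by both events:

```python
def process_money_transferred(event_store, send_email, transfer):
    """Pull-based, business-level dedup: before doing the work, look in the
    durable store for an EmailSent event carrying the same business data."""
    already_sent = any(
        e["type"] == "EmailSent" and e["transfer_id"] == transfer["transfer_id"]
        for e in event_store.read_all()   # a real store would index by the business key
    )
    if already_sent:
        return  # at least one copy of the email went out; nothing left to do
    send_email(transfer["recipient"], transfer)
    event_store.append({"type": "EmailSent", "transfer_id": transfer["transfer_id"]})
```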
The push based models (pub/sub, UDP multicast) become latency optimizations -- the arrival of the push message tells the subscriber to pull earlier than it normally would.
In the extreme push case, you pack into the pushed message enough information that the subscriber(s) can act upon it immediately, and trust that the idempotent handling of the message will prevent problems when the redundant copy of the message arrives on the slower channel.
If nobody needs reliable messaging on transport level, how to implement reliable PubSub on business level?
The original article does not state that "nobody needs reliable messaging on transport level"; it states that the ordering of messages should be enforced at the business level because, in some cases, this ordering is an important characteristic of the business.
In any case, PubSub is at the infrastructure level; you can't say that you implement PubSub at the business level. It doesn't make sense.
But then how could you ensure once-only delivery at the business level? By using a Saga/Process manager. One of their important responsibilities is exactly that. You can combine that with idempotent Aggregates. Also, you could identify terms from the Ubiquitous Language that emphasize ordering, like a transaction phase, and include them in your domain models (for example as properties of the events).
If you do not have an infrastructural component that ensures at-least or exactly-once delivery, how can you ensure (when you're in an at-most-once infrastructure) that your events emitted will be received?
If you do not have at-least-once delivery, then you could use the first event, the one that initiates the whole process. I would use event polling and a Saga that ensures that every important step in the process is reached at the right moment.
In your case, as the sending of the email is an important business aspect, I would include it as a step in the process.
