Is it acceptable to have an invalid state in event sourcing after event upgrading and before patching?

Let's say I have a stream of persisted events that builds a valid state according to some "schema" I have defined.
I change the schema and the events are upgraded to reflect this.
However, some state could not be made valid just by upgrading events; I also needed to add more events to patch the state and make it fully valid.
Firstly, is this reasoning at all valid in terms of event sourcing?
If so, how do I handle cases where a specific version of a state is no longer valid? I mean, is this acceptable? Should it still be possible to rehydrate a version with invalid state? If this is a write model and it's not the latest version, I could not modify this state anyway, so maybe it's no big deal?

However, some state could not be made valid just by upgrading events; I also needed to add more events to patch the state and make it fully valid.
"Compensating events" is the usual term; there is a clerical error in the book of record, so we need to add a new event to the history that corrects the mistake.
If so, how do I handle cases where a specific version of a state is no longer valid?
As a rule, you want to be wary, extremely wary, of introducing any automated validation that prevents you from loading an invalid history. Remember, state is just state; the business rules constrain the way the domain is allowed to change. Leaving broken states readable, but broken, is safe.
In particular, if you allow the state to load, it is a straightforward exercise to enumerate your event streams, test the final state of the object, and produce an exception report for any streams that produce an invalid state, escalating them to operators/management for handling, and so on.
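A minimal sketch of such an audit pass, reusing the repository from the snippets below; allStreamIds() and satisfiesInvariants() are assumed for the example and not part of any particular library:

List<String> exceptionReport = new ArrayList<>();
for (StreamId id : repository.allStreamIds()) {      // assumed: enumerate all streams
    State current = State.SEED;
    for (Event e : repository.getHistoryById(id)) {
        current = current.apply(e);                   // loading never refuses "invalid" state
    }
    if (!current.satisfiesInvariants()) {             // assumed: validity checked after the fact
        exceptionReport.add(id + ": final state violates current business rules");
    }
}
// hand exceptionReport to operators/management for manual follow-up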
Assuming that you are reasonably careful about input validation, and comparing whether your proposed command is consistent with latest known state (aggregates enforce business rules, but they don't need to hoard those rules for themselves), then you can probably achieve error rates low enough that you don't need aggressive data validation. That's especially true when the errors are easy to detect and cheap to fix.
Failing that, freezing any aggregates while they are in an invalid state is a good way to prevent further damage.
But if you really need the state to stay valid, there's a trick that you can play with compensating events.
Consider: the basic pattern of event sourcing looks something like
History history = repository.getHistoryById(id);
State current = State.SEED;
for (Event e : history) {
    current = current.apply(e);
}
There's actually a hidden concept here, which encapsulates the logic for processing the events prior to passing them to the state. Hidden, because the null case just passes the enumerated events straight through to the target.
History history = repository.getHistoryById(id);
Historian historian = new Historian();
State current = State.SEED;
for (Event e : historian.reviewEvents(history)) {
    current = current.apply(e);
}
The historian gives you a place to put your compensating event logic: based on its own state, the historian passes through most events, but fixes the ones it knows need edits/compensation/redaction.
Where does the historian state come from? Why, from the history of the historian, of course. You load the history of the event corrections, which will typically be short, into the historian, and then let the historian clean up the events for the aggregate.
And if you need corrections for the historian? It's turtles all the way down! Each stream has a unique historian; the identifier for the historian's stream is calculated from the stream it filters (named UUIDs, for example, would allow you to do this). So for each stream, you check to see if a historian stream exists; when you find one that doesn't, you know to stop searching and use the null historian, roll up the changes, process the final sequence of events to regenerate the state of your real object, and off you go.
Mind you, I haven't seen a reference implementation of this idea anywhere; it's whiteboard sound, but the truth is I've been deferring this requirement in my own designs.
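Purely to make the whiteboard idea concrete (this is not a reference implementation; every name beyond reviewEvents is invented for the sketch), such a Historian might look roughly like:

class Historian {
    // corrections keyed by the sequence number of the event they replace;
    // an empty map is the "null historian" that passes everything straight through
    private final Map<Long, Event> corrections = new HashMap<>();

    // The historian's own state is sourced from its (usually short) correction stream.
    static Historian fromHistory(History correctionStream) {
        Historian historian = new Historian();
        for (Event e : correctionStream) {
            if (e instanceof EventRedacted redaction) {   // invented event type
                historian.corrections.put(redaction.targetSequence(), redaction.replacement());
            }
        }
        return historian;
    }

    // Pass most events through untouched; substitute the ones that need fixing.
    Iterable<Event> reviewEvents(History history) {
        List<Event> reviewed = new ArrayList<>();
        long sequence = 0;
        for (Event e : history) {
            reviewed.add(corrections.getOrDefault(sequence, e));
            sequence++;
        }
        return reviewed;
    }
}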

Related

Event Sourcing: multiple events vs a single "StatusChanged"

Assuming the common "Order" aggregate, my view of events is that each should be representative of the command that took place. E.g. OrderCreated, OrderPicked, OrderPacked, OrderShipped.
Applying these events in the aggregate changes the status of the order accordingly.
The problem:
I have a projector that lists all orders in the system and their statuses. So it consumes the events, and like with the aggregate "apply" method, it implements the logic that changes the status of the order.
So now the logic exists in two places, which is... not good.
A solution to this is to replace all the above events with a single StatusChanged event that contains a property with the new status.
Pros: both aggregate and projectors just need to handle one event type, and set the status to what's in that event. Zero logic.
Cons: the list of events is now very implicit. Instead of getting a list of WHAT HAPPENED (created, packed, shipped, etc.), we now have a list of status-change events.
How do you prefer to approach this?
Note: this is not the full list of events. Other events contain other properties, so clearly they don't belong to this problem. The problem is with events that don't contain any info and just change the status of an order.
In general it's better to have finer-grained events, because this preserves context (and means that you don't have to write logic to reconstruct the context in your consumers).
You typically will have at most one projector which is duplicating your aggregate's event handler. If its purpose is actually to duplicate the aggregate's event handler (e.g. update a datastore which facilitates cross-aggregate querying), you may want to look at making that explicit as a means of making the code DRY (e.g. function-as-value, strategy pattern...).
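A rough sketch of making that sharing explicit with a function-as-value; the type and event names here are illustrative, not from the question:

// One "evolve" function owned by the domain; the aggregate's rehydration and the
// order-list projector both delegate to it, so the status-transition logic lives
// in exactly one place.
BiFunction<OrderStatus, OrderEvent, OrderStatus> evolve = (status, event) -> {
    if (event instanceof OrderCreated) return OrderStatus.CREATED;
    if (event instanceof OrderPicked)  return OrderStatus.PICKED;
    if (event instanceof OrderPacked)  return OrderStatus.PACKED;
    if (event instanceof OrderShipped) return OrderStatus.SHIPPED;
    return status;                                    // ignore unrelated events
};

// aggregate rehydration:  status = evolve.apply(status, e);
// order-list projector:   row.setStatus(evolve.apply(row.getStatus(), e));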
For the other projectors you write (and there will be many as you go down the CQRS/ES road), you're going to be ignoring events that aren't interesting to that projection and/or doing radically different things in response to the events you don't ignore. If you go down the road of coarse events (CRUD being about the limit of coarseness: a StatusChanged event is just the "U" in CRUD), you're setting yourself up for either:
- duplicating the aggregate's event handling/reconstruction in the projector, or
- carrying oldState and newState in the event (viz. just saying StatusChanged { newState } isn't sufficient)

before you can determine what changed, and the code for determining whether a change is interesting will probably be duplicated and more complex than the aggregate's event-handling code.
The coarser the events, the greater the likelihood of eventually having more duplication, less understandability, and worse performance (or higher infrastructure spend).
So now the logic exists in two places, which is... not good.
Not necessarily a problem. If the logic is static, then it really doesn't matter very much. If the logic is changing, but you can coordinate the change (ex: both places are part of the same deployment bundle), then it's fine.
Sometimes this means introducing an extra layer of separation between your "projectors" and the consumers - ex: something that is tightly coupled to the aggregate watching the events, and copying status changes to some (logical) cache where other processes can read the information. Thus, you preserve the autonomy of your component without compromising your event stream.
Another possibility to consider is that we're allowed to produce more than one event from a command - so you could have both an OrderPicked event and a StatusChanged event, and then use your favorite filtering method for subscribers only interested in status changes.
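Sketched, assuming your command handler is allowed to return a list of events (all names below are illustrative):

// Hypothetical command handler: emits both the fine-grained fact and the coarse
// status change; subscribers that only care about status filter for StatusChanged.
List<OrderEvent> handle(PickOrder command, OrderState state) {
    // ...validate the command against the current state first...
    return List.of(
        new OrderPicked(command.orderId(), command.pickerId()),
        new StatusChanged(command.orderId(), OrderStatus.PICKED)
    );
}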
In effect, we've got two different sets of information to track to remember later - inputs (information in the command, information copied from local caches), and also things we have calculated from those inputs, prior state, and the business policies that are now in effect.
So it may make sense to separate those expressions of information anyway.
If event sourcing is a good approach for the problems you are solving, then you are probably working on problems that are pretty important to the business, where specialization matters (otherwise, licensing an off-the-shelf product and creating adapters would be more cost effective). In which case, you should probably expect to invest in thinking deeply about the different trade-offs you need to make, rather than hoping for a one-size-fits-all solution.

Where to apply business logic in event sourcing

In event sourcing, I am a bit confused about where exactly the business logic has to be applied. I have already searched on Google, but all the examples are very basic, i.e. updating the state of an object inside a handler from an event object. In the scenario below I still don't understand where exactly the business logic should go.
For example, let's take a scenario where we update the status of an IntervieweeVO, which exists inside the Interview aggregate class as below:
class Interview extends AggregateRoot {
    private IntervieweeVO intervieweeVO;
}

class IntervieweeVO {
    int performance;
    String status;
}

class IntervieweeSelectedEvent extends BaseEvent {
    private IntervieweeVO intervieweeVO;
}
I have a piece of business logic: if the interviewee's performance < 3, then status = REJECTED; otherwise the status should be SELECTED.
So my question is: where should I keep the above business logic? Below are three scenarios:
1) Before applying an event: do the business logic, then apply(IntervieweeSelectedEvent), and then eventstore.save(intervieweeSelectedEvent).
2) Inside the EventHandler: apply the business logic inside the EventHandler class, e.g. in handle(IntervieweeSelectedEvent intervieweeSelectedEvent), check the business logic and then update the object state in the read model table.
3) Apply the business logic in both places, i.e. before applying the event and also while handling the event (combining 1 + 2 above).
Please clarify the above for me.
The main issue with event sourcing is that it is hard to produce a viable example using synthetic scenarios.
But I could probably suggest something a little bit better than Interview. If you look at pre-computer-era event-sourced systems, you'll find that an event stream, which is the store of events composing the lifecycle of some entity, is rather a long-lived thing. Events in an entity could span a few days (a list that tracks some document flow), a year (an accounting period for some organisation) or tens of years (medical records for some person).
A single event stream usually represents a single entity - a legal process, a ledger or a person... Each event is a transactional (as in ACID) change to the state of the entity.
In your case such an entity could be, say, a position: it is opened, announced, interviewee invited, invitation accepted, skills assessed, offer made, offer accepted, position closed. Off the top of my head.
When an event is added to an entity, it means that the entity's state has changed. It is the new truth about the entity. You want to be careful about changing the truth. So, that's where the business logic happens. You run some business logic to make the decision whether to change the truth or not. If you decide to update the state of the truth, you save the event. That being said, "Interviewee rejected" is a valid event in this case.
Since an event is persisted, all the saved events of an entity are unconditionally the part of the truth about the entity, in their respective order. You then don't decide whether to "accept" or "reject" a persisted event - only how it would affect a projection.
You should be able to reconstruct the entity's state as of a specific point in time from the event stream.
This implies that applying events should NOT contain any logic other than state mapping logic. All state necessary to project the AR's state from the events must be explicitly defined in those events.
Events are an expressive way to define state changes, not operations/commands. For instance, if IntervieweeRejected means IntervieweeStatusChanged(rejected) then that meaning can't ever change. The IntervieweeRejected event can't ever imply anything else than status = rejected, unless there's some other state captured in the event's data (e.g. reason).
Obviously, the way the state is represented can always change, but the meaning must not. For example the AR may have started by only projecting the current status and later on projected the entire status history.
apply(IntervieweeRejected) => status = REJECTED //at first
apply(IntervieweeRejected) => statusHistory.add(REJECTED) //later
I have a piece of business logic: if the interviewee's performance < 3, then status = REJECTED; otherwise the status should be SELECTED.
Business logic would be placed in standard public AR methods. In this specific case you may expect interviewee.assessPerformance(POOR) to yield IntervieweePerformanceAssessed(POOR) and IntervieweeRejected events. Should you need to reevaluate that smart screening policy at a later time (e.g. if it has changed) then you could implement a reevaluateSmartScreeningPolicy operation.
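A rough sketch of that shape, using the question's classes; the raise() helper and the event constructors are assumed for the example rather than taken from any particular framework:

class Interview extends AggregateRoot {
    private IntervieweeVO intervieweeVO;

    // Business logic lives in the public command method: it decides, from the
    // input and the current state, which events to raise.
    public void assessPerformance(int performance) {
        raise(new IntervieweePerformanceAssessed(performance));   // raise() assumed on AggregateRoot
        if (performance < 3) {
            raise(new IntervieweeRejected());
        } else {
            raise(new IntervieweeSelected());
        }
    }

    // Applying events is pure state mapping: no decisions here.
    void apply(IntervieweePerformanceAssessed e) { intervieweeVO.performance = e.performance(); }
    void apply(IntervieweeRejected e)            { intervieweeVO.status = "REJECTED"; }
    void apply(IntervieweeSelected e)            { intervieweeVO.status = "SELECTED"; }
}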
Also, please note that such logic may not even belong in the Interviewee AR itself. The smart screening policy may be seen as something that happened after/in response to the IntervieweePerformanceAssessed event. Furthermore, I can easily see how a smart screening policy could become very complex and AI-driven, which could justify it living in a dedicated Screening bounded context.
Your question actually made me think about how to effectively capture the context or why events occurred and I've asked about that here :)
You tagged your question cqrs, but this is actually the missing part in your example.
Event sourcing is merely a way to look at the current state of an object. You either save that state as it appears now, or you source it from everything that happened (e.g. a bank account's current balance as a stored value, or as the sum of all transactions).
So an event is a "fact" of something that happend. In your case that would be the interview with a certain score. And (dependent on your business logic) it COULD also state the status if the barrier is expected to change over time.
The crucial point is here that you should always adhere to the following chain:
"A command gets validated and if it passes it creates an unchangeable event that is persisted"
This means that in your case I would go for option 1. A SelectIntervieweeCommand should be validated, and if everything is okay it creates an IntervieweeSelectedEvent, which is an unchangeable fact. Thus the business logic of whether the interviewee passed or not must reside in the command handler function.

Is it ok to have FAT events with event sourcing?

I have recently been building an application on top of Greg Young's EventStore as my persistence layer, and I have been pondering how big I should allow an event to get.
For example, I have a UK Address aggregate with the following fields:
UK_Address
-BuildingName
-Street
-Locality
-Town
-Postcode
Now I'm building the UI using React/Redux and was wondering: should I create a single FAT addressUpdated event containing all the above fields?
Or should I create an event for each of the different fields and batch them within the client until the Save event is fired? E.g. buildingNameUpdated, streetUpdated, localityUpdated events.
I'm not sure the answer is as black and white as I have asked it; what I really would like to know is what conditions/constraints you could use to make the decision.
should I create an event for each of the different fields?
No. The representations of your events are part of the API -- so you want to use spellings that make sense at the level of the business, not at the level of the implementation.
Now I'm building the UI using React/Redux and was wondering: should I create a single FAT addressUpdated event containing all the above fields?
You don't need to constrain the data that you send to your UI to match that which is in the persistence store. The UI is just a cached representation of a read model; there's no reason that representation needs to have the same form as what is in your event store.
Consider the React model itself -- your code makes changes to the "in memory" representation of your data, and then the library computes the new DOM and replaces it, which in turn causes the browser to update its view, which in turn causes the pixels on the screen to change.
So taking a fat event from the store, and breaking it into field level events for the UI is fine. Taking multiple events from the store and aggregating them into a single message for the UI is also fine. Taking events from the event store and transforming them into a spelling that the UI will recognize is also fine.
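For example, a read-side adapter might split one stored fat event into field-level messages for the UI; all of the names below are illustrative:

// Translate one persisted AddressUpdated event into per-field UI messages.
// The shape the UI consumes does not have to match the shape in the store.
List<UiFieldChanged> toUiMessages(AddressUpdated e) {
    return List.of(
        new UiFieldChanged("buildingName", e.buildingName()),
        new UiFieldChanged("street",       e.street()),
        new UiFieldChanged("locality",     e.locality()),
        new UiFieldChanged("town",         e.town()),
        new UiFieldChanged("postcode",     e.postcode())
    );
}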
Do you have any comment regarding Arien's answer about keeping fields that need to be consistent together, so that regardless of when you snapshot the current state of the world it would be in a valid state?
I don't believe that this makes sense, and I'm not sure if it is possible in general.
It doesn't make sense, because "valid state" is a write model concern only; events are things that have happened, and it's too late to vote on whether they are valid or not. For instance, if you deploy a new model, with a new invariant, it still needs to respect the history of what happened before. So you can build a snapshot for that new model, but the snapshot may not be "valid". Too bad.
Given that, I don't think it makes sense to worry over whether each individual event in a commit leaves the snapshot in a valid state.
In particular, if a particular transaction involves multiple entities, it is very likely that the domain language will suggest an event for each entity (we "debit cash" and "credit accounts receivable"). The entities themselves, of course, are capable of changing independently of each other -- it's the aggregate that maintains the balance.
You have to bundle all the information together in one event when the data has to be consistent with itself.
So when you update the address one field at a time, a client can observe an unwanted address. This will happen when the client has not yet processed all the events at the time of reading, due to eventual consistency.
Example:
Change address (City=1, Street=1, Housenumber=1) to (City=2, Street=2, Housenumber=2)
When you do this with three events and you have processed only one of them at the time of reading, you could get the address (City=2, Street=1, Housenumber=1).
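Sketched as event shapes (the types are illustrative), the difference looks like this:

// Bundled: any consumer that sees this event sees a complete, consistent address.
record AddressChanged(String city, String street, String houseNumber) {}

// Split: a reader that has processed only CityChanged so far observes
// (City=2, Street=1, Housenumber=1), an address that never existed.
record CityChanged(String city) {}
record StreetChanged(String street) {}
record HouseNumberChanged(String houseNumber) {}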
If puzzled, try the solution that is easier to implement. I guess the "FAT" event will be easier: you will end up spending less time implementing/debugging/supporting it.
This is usually referred to as the YAGNI / KISS / Occam's razor principle.
In theory, and I find it to be a good rule of thumb, you should have your commands and events reflect the intent of the user, staying true to DDD. You can find a good explanation of the pros and cons of event granularity here: https://medium.com/@hugo.oliveira.rocha/what-they-dont-tell-you-about-event-sourcing-6afc23c69e9a

How do I determine list of valid events for current state

When representing a state machine in a UI, it is useful to know the list of valid events for the current state, in order to disable/hide invalid options.
I would not want to replicate all the state machine rules.
We have had issue gh-44 open a long time, but I haven't found any reliable way to look into the future and predict what a machine would do.
The problem comes from the fact that with nested hierarchical states, the same event can be handled by any state, starting from the active one and going up through its parent structure. As transitions can have guards which may evaluate conditions dynamically, even if we added some sort of prediction engine it would still be impossible to predict the future, as we don't have a time machine.

Event sourcing: fixing business logic bugs

I'm doing some study of event sourcing before applying it (or not).
Quick question: when using the event sourcing pattern, we can imagine this scenario to handle an event:
command sent
the command handler receives the command, validates it, then
the command handler persists this event and publishes it
the business model applies this event (business logic algorithm v1, for example), mutating its internal state
We can replay all the events and reconstruct the business object state.
How do we handle business logic bugs (business logic algorithm v1 contains a nasty bug)?
I read that we can fix the bug and replay the events, and then we get the business model in a valid state once again.
But what happens if, with the fixed business rule, applying event #1 would have caused the 'future' commands to fail? In other words, event #2, event #3, ..., event #n were dependent on the state of the domain model after applying event #0. How can we fix the cascading event failures?
I don't have a specific use case, but we can imagine an account whose balance is currently positive. Applying Event #0 increments the balance, but this was a bug: the developer wanted to reduce the balance. Event #1 is a purchase that was valid because of the positive balance at the time.
The developer fixes the bug and replays the events. Event #0 decreases the balance, which becomes negative. Event #1 is replayed: what happens?
Do we need to handle this case with 'compensation'? How?
Thanks in advance for your comments and for any external resources that can be of help (articles, blogs).
bye
Minor correction
When using the event sourcing pattern, we can imagine this scenario to handle an event:
command sent
the command handler receives the command, validates it, then
the business model verifies that the command can be satisfied without violating the business invariant, and calculates the ensuing events
the command handler persists these events and publishes them
The command handler (specifically, the anti-corruption layer) is responsible for making sure that the command is well formed. The business model decides if the command is permitted by the business.
The good news: the events are just state changes; all of the rule validation is already done. When you fix the bug in the domain object so that it produces the correct events in response to the command, you aren't changing the way the event is applied.
And you certainly aren't changing the history -- if the ATM gave away $20 that it wasn't supposed to, you can't get the money back by editing the record.
What that means is that deploying the bug fix keeps the problem from getting worse; but it doesn't do anything for the event histories that are incorrect.
Compensating events are the right answer here. Ever have a grocery clerk double-scan an item, and have to back one of them out? If you look closely, you'll see all three items:
+1 candy bar
+1 candy bar
-1 candy bar
That's the idiom of the compensating event being appended to the end of the stream.
So if the error first appeared in event #0, and [event #1 .. event #99] have then been played on top of that, the remedy for the error is to publish a compensating event #100.
Notice that this is exactly what bookkeepers would do. You put the wrong sign on the entry on line #1, add a bunch more entries, realize your mistake, and add a new entry that compensates for the earlier mistake.
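In code, sticking with the bank-account example above, the remedy is an ordinary append rather than an edit of history; the store API and event names below are illustrative:

// The buggy event #0 stays in the stream untouched. Like the extra "-1 candy bar"
// line, compensating events are appended after event #99 to correct the book of record.
long wrongIncrease = 100_00;     // what the buggy event #0 actually did
long intendedDecrease = 100_00;  // what the developer meant it to do
eventStore.appendToStream(accountId, 99 /* expected version */, List.of(
    new BalanceAdjusted(-wrongIncrease, "backs out the erroneous increase from event #0"),
    new BalanceAdjusted(-intendedDecrease, "applies the decrease that was originally intended")
));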
More good news: in mature business processes, there are already mitigation procedures in place to handle various contingencies. So you can grab a meeting with your domain experts, and doodle on the whiteboard explaining the problem, and your experts should be able to show you the right way to compensate for it. Everything after that is feature management (does the mitigation need to be automated? Does the system need to do the mitigation automatically, or can it let human experts tell it what mitigation to apply, etc. etc.)
