DDD - Update a small detail on an Entity without updating the whole Aggregate Root - client-server

Say my AggregateRoot is the Order model. It has a collection of OrderItems (entities).
I have only one repository, for the AggregateRoot (Order), but none for OrderItems.
What should I do when the client wants to make only a small change, like updating the Remarks field on one OrderItem?
My current understanding is that the client sends the update as a DTO. The middleware then loads the whole Order, updates the single detail, and commits the whole Order to the repository.
If I understood it correctly, is that good practice in real life, or do you handle it differently? It doesn't sound performant or maintenance-friendly to me.

As with everything in DDD, the answer lies in the domain rules. Everything has to gravitate around rules, not around data structures.
Warning: Too simplistic example below!
You have to change the Remarks field of one order item, so ask yourself: what restrictions and invariants does the operation of changing the Remarks field have? Does OrderItem have all the info it needs for this? If yes, then in this case OrderItem is your aggregate root.
Are some Remarks not allowed on an OrderItem because it belongs to a certain type of Order, while other Order types allow them? Then Order is your aggregate root.
This gives you a clue about how to approach it, BUT, as you comment, loading an Order with all its OrderItems just to change one OrderItem's Remarks is absolutely not performant.
“I’m sorry that I coined the term ‘objects,’ because it gets many people to focus on the lesser idea. The big idea is ‘messaging’” ~ Alan Kay
Remember I said that DDD has to gravitate around rules and not around data structures?
So, do not think about data structures. Model everything around the commands, the events (the messages), and the rules. Make your persistence repositories bring the appropriate Aggregate Root for that command, use the AR to apply the command, return a domain event with the changes produced, and use that event to persist the new system state and notify other services about the change.
Code example from Aggregate root invariant enforcement with application quotas
class ApplicationService {
    public void registerUser(RegisterUserCommand registerUserCommand) {
        // avoid a wrong entity state; the ctor fails if some data is incorrect
        var user = new UserEntity(registerUserCommand.userData);
        // Handle is overloaded for every command we need: uses registerUserCommand.tenantId to bring total_active_users and quota from persistence and builds a RegistrationAggregate fed with TenantData
        RegistrationAggregate agg = aggregatesRepository.Handle(registerUserCommand);
        // returns the domain changes expressed as an event
        var userRegisteredEvent = agg.registerUser(user);
        // Handle is overloaded for every event we need: open a transaction, persist userRegisteredEvent.fromTenant.total_active_users for the tenantId (optimistic concurrency may fail and roll back the transaction if total_active_users has changed since we read it), persist userRegisteredEvent.user in relationship with tenantId, commit the transaction
        persistence.Handle(userRegisteredEvent);
        // notify external sources for eventual consistency
        eventBus.publish(userRegisteredEvent);
    }
}
This allows you to bring an OrderItemRemarkManagerAggregate into memory from persistence that has just the info you need to change the Remarks (e.g. the OrderItem ID, the current Remarks, the OrderItem status, the OrderType it belongs to, etc.); just use it to apply the operation and persist the resulting changes.
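A minimal sketch of what such a narrow aggregate could look like (all names and both invariants here are invented for illustration, not taken from a real model):

public enum OrderItemStatus { Pending, Shipped }
public enum OrderType { Standard, Internal }
public record OrderItemRemarksChangedEvent(Guid OrderItemId, string NewRemarks);

public class OrderItemRemarkManagerAggregate
{
    private readonly Guid _orderItemId;
    private readonly OrderItemStatus _status;   // hydrated state: only what this operation needs
    private readonly OrderType _orderType;
    private string _remarks;

    public OrderItemRemarkManagerAggregate(Guid orderItemId, string remarks,
        OrderItemStatus status, OrderType orderType)
    {
        _orderItemId = orderItemId;
        _remarks = remarks;
        _status = status;
        _orderType = orderType;
    }

    public OrderItemRemarksChangedEvent ChangeRemarks(string newRemarks)
    {
        // invented invariants, just to show where the rule checks live
        if (_status == OrderItemStatus.Shipped)
            throw new InvalidOperationException("Cannot change remarks on a shipped item");
        if (_orderType == OrderType.Internal && string.IsNullOrWhiteSpace(newRemarks))
            throw new InvalidOperationException("Internal orders require non-empty remarks");

        _remarks = newRemarks;
        return new OrderItemRemarksChangedEvent(_orderItemId, newRemarks);
    }
}

The repository hydrates just these few fields, the command handler calls ChangeRemarks, and the returned event is what gets persisted and published.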
Later you could worry about reusing an aggregate for several operations (always in the same Bounded Context, of course), or even refactor as you need.

Related

Am I right in separating integration events from domain events?

I use event sourcing to store my object.
Changes are captured via domain events, holding only the minimal information required, e.g.
GroupRenamedDomainEvent
{
    string Name;
}
GroupMemberAddedDomainEvent
{
    int MemberId;
    string Name;
}
However, elsewhere in my application I want to be notified if a Group is updated in general. I don’t want to have to accumulate or respond to a bunch of more granular and less helpful domain events.
My ideal event to subscribe to is:
GroupUpdatedIntegrationEvent
{
    int Id;
    string Name;
    List<Member> Members;
}
So what I have done is the following:
Update the group aggregate.
Save the generated domain events.
Use these generated domain events to see whether to trigger my integration event.
For the example above, this might look like:
var groupAggregate = _groupAggregateRepo.Load(id);
groupAggregate.Rename("Test");
groupAggregate.AddMember(1, "John");
_groupAggregateRepo.Save(groupAggregate);
var domainEvents = groupAggregate.GetEvents();
if (domainEvents.Any())
{
    _integrationEventPublisher.Publish(
        new GroupUpdatedIntegrationEvent
        {
            Id = id,
            Name = groupAggregate.Name,
            Members = groupAggregate.Members
        });
}
This means my integration events used throughout the application are not coupled to what data is used in my event sourcing domain events.
Is this a sensible idea? Has anyone got a better alternative? Am I misunderstanding these terms?
Of course you're free to create and publish as many events as you want, but I don't see (m)any benefits there.
You still have coupling: you just shifted the coupling from one Event to another. Here it really depends on how many Event Consumers you've got, whether everything is running in-memory or stored in a DB, and whether your Consumers need some kind of Replay mechanism.
Your Integration Events can grow over time and use a lot of bandwidth: if your Group contains 1000 Members and you add 5 new Members, you'll trigger 5 integration events that always contain all members, instead of just the small delta. That uses much more network bandwidth and hard drive space (if persisted).
You're coupling your Integration Event to your Domain Model. I think this is not good at all. You won't be able to simply change the Member class in the future, because all Event Consumers depend on it. A solution could be to instead use a separate MemberDTO class for the Integration Event and write a MemberToMemberDTO converter (see the sketch after these points).
Your Event Consumers can't decide which changes they want to handle, because they always just receive the full blown current state. The information what actually changed is lost.
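A rough sketch of that MemberDTO suggestion (the DTO and converter are names proposed above, not an existing API; Member is assumed to be the domain class from the question, with Id and Name):

public class MemberDTO
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public static class MemberToMemberDTOConverter
{
    // copy only what the integration contract needs, so the domain Member class
    // can change freely without breaking event consumers
    public static MemberDTO Convert(Member member) =>
        new MemberDTO { Id = member.Id, Name = member.Name };
}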
The only real benefit I see is that you don't have to again write code to apply your Domain Events.
In general it looks a bit like Read Model in CQRS. Maybe that's what you're looking for?
But of course it depends. If your solution fits your application's needs, then it'll be fine to do it that way. Rules are made to show you the right direction, but they're also meant to be broken when they get in your way (and you know what you're doing).

Where to apply business logic in EventSourcing

In event sourcing, I am a bit confused about where exactly to apply business logic. I have already searched on Google, but all the examples are very basic, i.e. updating the state of an object inside a handler from an event object. In my scenario I still don't understand where exactly the business logic has to go.
For example, let's take a scenario of updating the status of an IntervieweeVO, which exists inside the Interview aggregate class, as below:
class Interview extends AggregateRoot {
    private IntervieweeVO intervieweeVO;
}
class IntervieweeVO {
    int performance;
    String status;
}
class IntervieweeSelectedEvent extends BaseEvent {
    private IntervieweeVO intervieweeVO;
}
I have business logic: if the interviewee's performance is < 3, then status = REJECTED; otherwise status should be SELECTED.
So my question is: where should I keep the above business logic? Below are 3 scenarios:
1) Before applying an event: do the business logic, then apply(IntervieweeSelectedEvent), and then eventstore.save(intervieweeSelectedEvent).
2) Inside the event handler: apply the business logic inside the event handler class, like handle(IntervieweeSelectedEvent intervieweeSelectedEvent), check the business logic, and then update the object state in the read model table.
3) Applying the business logic in both places, i.e. before applying an event and also while handling the event (combining 1 + 2 above).
Please clarify the above for me.
The main issue with event sourcing is that it is hard to produce a viable example using synthetic scenarios.
But probably I could suggest something a little bit better than Interview. If you look at pre-computer-era event-sourced systems, you'll find that an event stream, which is the store of events composing the lifecycle of some entity, is rather a long-lived thing. The events of an entity could span a few days (a list that tracks some document flow), a year (an accounting period for some organisation) or tens of years (medical records for some person).
A single event stream usually represents a single entity - a legal process, a ledger or a person... Each event is a transactional (as in ACID) change to the state of the entity.
In your case such an entity could be, say, a position, which is opened, announced, interviewee invited, invitation accepted, skills assessed, offer made, offer accepted, position closed. Off the top of my head.
When an event is added to an entity, it means that the entity's state has changed. It is the new truth about the entity. You want to be careful about changing the truth. So, that's where business logic happens. You run some business logic to make the decision whether to change the truth or not. If you decide to update the truth, you save the event. That being said, "Interviewee rejected" is a valid event in this case.
Since an event is persisted, all the saved events of an entity are unconditionally part of the truth about the entity, in their respective order. You then don't decide whether to "accept" or "reject" a persisted event - only how it would affect a projection.
You should be able to reconstruct the entity's state as of a specific point in time from the event stream.
This implies that applying events should NOT contain any logic other than state mapping logic. All state necessary to project the AR's state from the events must be explicitly defined in those events.
Events are an expressive way to define state changes, not operations/commands. For instance, if IntervieweeRejected means IntervieweeStatusChanged(rejected), then that meaning can't ever change. The IntervieweeRejected event can't ever imply anything other than status = rejected, unless there's some other state captured in the event's data (e.g. reason).
Obviously, the way the state is represented can always change, but the meaning must not. For example the AR may have started by only projecting the current status and later on projected the entire status history.
apply(IntervieweeRejected) => status = REJECTED //at first
apply(IntervieweeRejected) => statusHistory.add(REJECTED) //later
I have business logic: if the interviewee's performance is < 3, then
status = REJECTED; otherwise status should be SELECTED.
Business logic would be placed in standard public AR methods. In this specific case you may expect interviewee.assessPerformance(POOR) to yield IntervieweePerformanceAssessed(POOR) and IntervieweeRejected events. Should you need to reevaluate that smart screening policy at a later time (e.g. if it has changed) then you could implement a reevaluateSmartScreeningPolicy operation.
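A minimal sketch of that shape (assuming a simple hand-rolled event-sourcing base; the event and method names follow the answer, the rest is invented):

public record IntervieweePerformanceAssessedEvent(int Performance);
public record IntervieweeRejectedEvent;
public record IntervieweeSelectedEvent;

public class Interview
{
    private readonly List<object> _pendingEvents = new();
    private string _status = "PENDING";

    public string Status => _status;
    public IReadOnlyList<object> PendingEvents => _pendingEvents;

    // business logic lives in the public AR method: it decides which facts to record
    public void AssessPerformance(int performance)
    {
        Apply(new IntervieweePerformanceAssessedEvent(performance));
        if (performance < 3)
            Apply(new IntervieweeRejectedEvent());
        else
            Apply(new IntervieweeSelectedEvent());
    }

    // applying an event only maps state; no business decisions here
    private void Apply(object @event)
    {
        switch (@event)
        {
            case IntervieweeRejectedEvent: _status = "REJECTED"; break;
            case IntervieweeSelectedEvent: _status = "SELECTED"; break;
        }
        _pendingEvents.Add(@event);
    }
}

Note how replaying the recorded events through Apply reconstructs the state without ever re-running the performance rule.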
Also, please note that such logic may not even belong in the Interviewee AR itself. The smart screening policy may be seen as something that happened after/in response to the IntervieweePerformanceAssessed event. Furthermore, I can easily see how a smart screening policy could become very complex and AI-driven, which could justify it living in a dedicated Screening bounded context.
Your question actually made me think about how to effectively capture the context of why events occurred, and I've asked about that here :)
You tagged your question cqrs, but this is actually the missing part in your example.
Event sourcing is merely a way to look at the current state of an object. You either save that state as it appears now, or you source it from everything that happened (e.g. a bank account's current balance stored as a value, or derived as the sum of all transactions).
So an event is a "fact" of something that happened. In your case that would be the interview with a certain score. And (depending on your business logic) it COULD also state the status, if the barrier is expected to change over time.
The crucial point here is that you should always adhere to the following chain:
"A command gets validated and, if it passes, it creates an unchangeable event that is persisted."
This means that in your case I would go for option 1. A SelectIntervieweeCommand should be validated and, if everything is okay, create an IntervieweeSelectedEvent, which is an unchangeable fact. Thus the business logic deciding whether the interviewee passed or not must reside in the command handler function.
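A sketch of that chain (names invented to match the example; a real event store API will differ):

public record SelectIntervieweeCommand(Guid InterviewId, int Performance);
public record IntervieweeSelectedEvent(Guid InterviewId, int Performance, string Status);
public interface IEventStore { void Save(object @event); }

public class SelectIntervieweeCommandHandler
{
    private readonly IEventStore _eventStore;

    public SelectIntervieweeCommandHandler(IEventStore eventStore) => _eventStore = eventStore;

    public void Handle(SelectIntervieweeCommand command)
    {
        // validation / business rule happens here, before the fact exists
        var status = command.Performance < 3 ? "REJECTED" : "SELECTED";

        // once persisted, the event is an unchangeable fact
        _eventStore.Save(new IntervieweeSelectedEvent(command.InterviewId, command.Performance, status));
    }
}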

EventSourcing and DDD Entity events

I have a DDD project using event sourcing. Currently there are many aggregate roots, many of which have collections of entities. Even more: some entities have collections of other entities.
Problem: reading the event sourcing event log for audit purposes.
Question: what is the best way to save events in the EventStore when an entity is updated/created/removed, with all of these things in mind: the events have to be easily readable; versioning; granular events are usually preferable (maybe not for this case); and domain events will probably be used for cross-domain communication.
Should I save in the root stream the whole root, with all collections of entities inside, as a RootChangedEvent?
Should I save only the entity which was updated/created/removed in the root stream, as an EntityChangedEvent/EntityCreatedEvent/EntityRemovedEvent?
Should I save two events in the root stream: one for the root (a RootChangedEvent with only the version property), plus a second for the entity, containing only the single changed property in an EntityChangedEvent, the whole entity in an EntityCreatedEvent, or only the id in an EntityRemovedEvent? (And how to handle an entity of an entity being created/updated/removed?)
Here is an example in my project:
The root: Pipeline.
public class Pipeline : AggregateRoot<IPipelineState>
It has a collection of entities: public IList<Status> Statuses.
And each Status has a collection of entities: public IList<Logic> Logics.
All collections could store a lot of entities. And right now I raise events like PipelineCreatedEvent, PipelineChangedEvent (not only when the Pipeline changes, but also when adding, updating, or removing a Status or Logic) and PipelineRemovedEvent.
There should be a single stream of events for any given aggregate, to avoid race conditions. An aggregate is a transaction boundary.
In your case, try to formulate what happened in your system not in terms of Entities, but in business words:
OrderCreated (orderId=123)
OrderItemAdded (orderId=123, 'product1')
OrderItemAdded (orderId=123, 'product2')
OrderItemRemoved (orderId=123, 'product1')
OrderPaid (orderId=123)
OrderArchived (orderId=123)
These events happened to what? To an Order, so Order is your aggregate root, and 123 is its aggregateId. You may not even need OrderItems there, unless the command handler requires them (say, you don't want to emit an OrderItemRemoved event for an already removed item).
You will have a single event stream for aggregate root 123, and nobody can, say, add an OrderItem while you are processing a PayOrder command.
It is important to understand that the more business-specific your events, the more flexibility you'll have later with domain aggregates and read models. Remember, your events are immutable and will be there forever!
OrderEntityChangedEvent(new Status = Paid) implies a particular structure of your entities.
OrderPaid assumes nothing except that there is an Order aggregate root somewhere.
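To make the contrast concrete (a sketch; the record shapes are assumed):

// structure-coupled: the event contract leaks the shape of your entities
public record OrderEntityChangedEvent(Guid OrderId, string Field, string NewValue);

// business-specific: assumes nothing except that an Order aggregate root exists
public record OrderPaid(Guid OrderId);

Renaming or splitting the status field breaks every consumer of the first event; the second survives any internal restructuring.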

CQRS DDD: How to validate products existence before adding them to order?

CQRS states: command should not query read side.
Ok. Let's take following example:
The user needs to create orders with order lines; each order line contains product_id, price, and quantity.
The client sends requests to the server with the order information and the list of order lines.
The server (command handler) should not trust the client, and needs to validate whether the provided products (product_ids) exist (otherwise there will be a lot of garbage).
Since the command handler is not allowed to query the read side, it should somehow validate this information on the write side.
What we have on the write side: repositories. In terms of DDD, repositories operate only with aggregate roots; the repository can only GET BY ID and SAVE.
In this case, the only option is to load all product aggregates, one by one (the repository has only a GET BY ID method).
Note: event sourcing is used for persistence, so it would be problematic and inefficient to load multiple aggregates at once to avoid multiple requests to the repository.
What is the best solution for this case?
P.S.: One solution is to redesign the UI (more like a task-based UI), e.g. the user first creates the order (with general info), then adds products one by one (each addition a separate HTTP request). But I still need to support bulk operations (an API for third-party applications, for example).
The short answer: pass a domain service (see Evans, chapter 5) to the aggregate along with the other command arguments.
CQRS states: command should not query read side.
That's not an absolute -- there are trade-offs involved when you include a query in your command handler; that doesn't mean that you cannot do it.
In domain-driven-design, we have the concept of a domain service, which is a stateless mechanism by which the aggregate can learn information from data outside of its own consistency boundary.
So you can define a service that validates whether or not a product exists, and pass that service to the aggregate as an argument when you add the item. The work of computing whether the product exists would be abstracted behind the service interface.
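A minimal sketch of that approach (the interface and all names here are hypothetical):

public interface IProductCatalog
{
    // the work of computing whether the product exists hides behind this interface
    bool ProductExists(Guid productId);
}

public record OrderLine(Guid ProductId, decimal Price, int Quantity);

public class Order
{
    private readonly List<OrderLine> _lines = new();

    public void AddLine(Guid productId, decimal price, int quantity, IProductCatalog catalog)
    {
        // the aggregate consults the domain service; note the data behind it can still be stale
        if (!catalog.ProductExists(productId))
            throw new InvalidOperationException($"Unknown product {productId}");

        _lines.Add(new OrderLine(productId, price, quantity));
    }
}

For a bulk operation, the handler can pass the same catalog instance while adding each line, so the check runs per line without the aggregate ever touching the read side directly.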
But what you need to keep in mind is this: products, presumably, are defined outside of the order aggregate. That means that they can be changing concurrently with your check to verify the product_id. From the point of view of correctness, there's no real difference between checking the validity of the product_id in the aggregate, or in the application's command handler, or in the client code. In all three places, the product state that you are validating against can be stale.
Udi Dahan shared an interesting observation years ago:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
If the client validated the data one hundred milliseconds ago when composing the command, and the data was valid then, what should the behavior of the aggregate be?
Think about a command to add a product that is composed concurrently with an order of that same product - should the correctness of the system, from a business perspective, depend on the order that those two commands happen to arrive?
Another thing to keep in mind is that, by introducing this check into your aggregate, you are coupling the ability to change the aggregate to the availability of the domain service. What is supposed to happen if the domain service can't reach the data it needs (because the read model is down, or whatever)? Does it block? Throw an exception? Make a guess? Does this choice ripple back into the design of the aggregate? And so on.

How to persist unfinished Aggregate Root in invalid state?

I have an aggregate root that needs to be in a valid state in order to be used in the system properly. However, the process of building the aggregate is long enough for users to get distracted. Sometimes all a user wants is to configure some part of this big aggregate, then save his work and go home; tomorrow he will finish constructing the aggregate.
How can I do this? My PM insisted that we allow aggregates to have an invalid state, and that we then check an IsValid boolean right before we use one.
I personally went another path: I used the Builder pattern for building my aggregate, and now I'm planning to persist the builder itself as some intermediate state.
I have an aggregate root that needs to be in a valid state in order to be used in the system properly. However, the process of building the aggregate is long enough for users to get distracted. Sometimes all a user wants is to configure some part of this big aggregate, then save his work and go home; tomorrow he will finish constructing the aggregate.
How can I do this?
You have two aggregates -- one is the "configuration" that the users can edit at their own pace. The other is the live running instance built from a copy of the configuration, but only if that configuration satisfies the invariant of the running instance.
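A sketch of that split (all names invented for illustration):

public class ReportConfiguration   // editable at any pace, allowed to be incomplete
{
    public string? Name { get; set; }
    public List<string> Columns { get; } = new();

    public bool SatisfiesInvariant() =>
        !string.IsNullOrEmpty(Name) && Columns.Count > 0;
}

public class RunningReport         // always valid: only buildable from a valid configuration
{
    public string Name { get; }
    public IReadOnlyList<string> Columns { get; }

    private RunningReport(string name, IReadOnlyList<string> columns)
    {
        Name = name;
        Columns = columns;
    }

    public static RunningReport FromConfiguration(ReportConfiguration config)
    {
        if (!config.SatisfiesInvariant())
            throw new InvalidOperationException("Configuration is incomplete");
        return new RunningReport(config.Name!, new List<string>(config.Columns));
    }
}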
By the way, there can be a situation where the "running" aggregate should be edited again (in fact, this is frequent). Then it can become invalid again.
You have two obvious options here:
Model every in-progress creation/change process as a different aggregate (perhaps even in a different Bounded Context; it could be CRUD). That allows you to free your main aggregate from contextual validation.
Use the same aggregate instance, but in different states. For instance, you may have some fields as Optional<T>, which could be empty when the aggregate is in the draft state, but can't be when it is published. The state pattern could possibly be useful here.
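For the second option, a sketch using explicit states (the names and the single optional field are assumptions):

public enum CampaignState { Draft, Published }

public class Campaign
{
    public CampaignState State { get; private set; } = CampaignState.Draft;
    public decimal? Budget { get; private set; }   // may stay empty while in Draft

    public void SetBudget(decimal budget) => Budget = budget;

    public void Publish()
    {
        // contextual validation happens at the state transition, not on every edit
        if (Budget is null)
            throw new InvalidOperationException("Cannot publish without a budget");
        State = CampaignState.Published;
    }
}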

Resources