Let's imagine that we want to create an application that implements Event sourcing. So we have the following aggregate root:
class Account {
    var balance: BigDecimal = BigDecimal.ZERO

    fun apply(event: EntryRegisteredEvent) {
        balance -= event.amount
    }

    fun registerNewDebitEntry(command: RegisterNewDebitEntryCommand): EntryRegisteredEvent {
        if (balance < command.amount)
            throw InsufficientBalanceException()
        balance -= command.amount
        return EntryRegisteredEvent.create(...)
    }
}
My question is:
Why am I changing the balance state in the registerNewDebitEntry() function? Shouldn't the method only check the invariants and create an event, leaving the apply() method solely responsible for changing the state of the aggregate root?
Does this separation make sense?
Edit 1:
I could apply the event instead of doing the change state logic again:
fun registerNewDebitEntry(command: RegisterNewDebitEntryCommand): EntryRegisteredEvent {
    if (balance < command.amount)
        throw InsufficientBalanceException()
    val event = EntryRegisteredEvent.create(...)
    apply(event)
    return event
}
But the question remains: do I really need to change the aggregate root at this moment?
Edit 2
I think the only reason to do this is in cases where you have to apply more than one command in the same process. But when that is not the case, I just don't see why we should update the aggregate root's state.
I'm only answering from a strictly DDD point of view.
One important point of DDD is that the domain entities are unaware of everything else in your application. They should not depend on infrastructure, commands, or event sourcing.
The reason is that the only complexity in it should be from the business rules and logic. Making sure that when it does something, it does it right.
That's hard if you start to mix in other things that it has to be responsible for (which also breaks SRP).
A domain entity with no dependencies is also so much easier to test.
In your case, I would have a Debit method that takes an amount argument. I would put the rest of the logic in an application service.
As a direct answer regarding the event: that error check has to be in the command handler method, to prevent faulty events. Where the actual applying happens (as in 'apply to the entity') doesn't matter. And if you do as suggested above, it will mostly be a one-liner, so no logic is duplicated.
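To illustrate, a rough sketch of that separation might look like the following. The repository, event store, command and event shapes here are assumptions made for the example, not something prescribed by the original posts:

import java.math.BigDecimal

// Supporting types with assumed shapes, just to make the sketch self-contained.
data class RegisterNewDebitEntryCommand(val accountId: String, val amount: BigDecimal)
data class EntryRegisteredEvent(val accountId: String, val amount: BigDecimal)
class InsufficientBalanceException : RuntimeException("Insufficient balance")

interface AccountRepository {
    fun load(accountId: String): Account
    fun save(account: Account)
}

interface EventStore {
    fun append(event: EntryRegisteredEvent)
}

// Domain entity: no knowledge of commands, events, or infrastructure.
class Account(var balance: BigDecimal = BigDecimal.ZERO) {
    fun debit(amount: BigDecimal) {
        if (balance < amount) throw InsufficientBalanceException()
        balance -= amount
    }
}

// Application service: unpacks the command, calls the entity, records the event.
class AccountService(
    private val accounts: AccountRepository,
    private val events: EventStore
) {
    fun handle(command: RegisterNewDebitEntryCommand) {
        val account = accounts.load(command.accountId)
        account.debit(command.amount) // the business rule lives in the entity
        events.append(EntryRegisteredEvent(command.accountId, command.amount))
        accounts.save(account)
    }
}

Whether the event is appended before or after saving, or whether the repository and event store are even separate things, is deliberately left open here; the point is only that the entity stays free of command and event types.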
Related
I'm currently developing an application using Spring Boot with Axon Framework. In Axon Framework there is something called an aggregate, which can store some state. Everything works fine, but there is a case where I have to check the same state for every incoming command before updating the aggregate. Something like this:
@CommandHandler
fun handle(command: UpdateProductCommand) {
    if (!isProductApproved) {
        throw IllegalArgumentException("This product has not been approved by qa.")
    }
    // ... do something
}

@CommandHandler
fun handle(command: PublishProductCommand) {
    if (!isProductApproved) {
        throw IllegalArgumentException("This product has not been approved by qa.")
    }
    // ... do something
}

// ... some other commands
// ... checking the same state again and again for every command
As you can see, I have to check isProductApproved in most of the commands. Is there any way to easily apply this state check to every command handler before it starts doing its logic? I would expect something like this:
@Aggregate
@CheckState(value = isProductApproved)
class ProductAggregate {
    // ... applied to every command
}
Or is there a better way?
In Axon Framework applications it is possible to define a handler interceptor for a specific component containing the handlers (in your case, an aggregate). This can be achieved by adding a method that handles the message, annotated with @MessageHandlerInterceptor.
You can find more details in the official Axon Reference Guide: https://docs.axoniq.io/reference-guide/axon-framework/messaging-concepts/message-intercepting#messagehandlerinterceptor
In my opinion this is the most pragmatic way to run logic that is general to a particular component, which in your case is an aggregate (but it can actually be any other messaging component: event or query handlers, for example).
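As a rough sketch of what that could look like inside the aggregate (the annotation attribute and parameter list follow the linked docs, but treat this as a sketch rather than a drop-in solution and double-check it against the Axon version you use):

import org.axonframework.commandhandling.CommandMessage
import org.axonframework.messaging.InterceptorChain
import org.axonframework.messaging.interceptors.MessageHandlerInterceptor
import org.axonframework.spring.stereotype.Aggregate

@Aggregate
class ProductAggregate {

    private var isProductApproved: Boolean = false

    // Invoked before every command handler in this aggregate.
    @MessageHandlerInterceptor(messageType = CommandMessage::class)
    fun intercept(interceptorChain: InterceptorChain) {
        if (!isProductApproved) {
            throw IllegalStateException("This product has not been approved by qa.")
        }
        interceptorChain.proceed() // hand over to the matching @CommandHandler
    }

    // ... @AggregateIdentifier field, @CommandHandler and @EventSourcingHandler methods omitted ...
}

Note that an interceptor like this runs for every command handler in the aggregate, including any that should be exempt from the check, which is what the next suggestion addresses.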
You mention that you have to add this check to most of the commands, not all of them. If indeed exceptions to the rule are possible, I think the following boring low-tech solution may be the best fit, even if it's probably not what you had in mind:
private fun requireProductIsApproved() = require(isProductApproved) {
    "This product has not been approved by qa."
}

@CommandHandler
fun handle(command: PublishProductCommand) {
    requireProductIsApproved()
    // ... do something
}
Why?
it's dead simple
it's flexible: simply omit it in CommandHandlers that don't need the check.
it keeps your domain logic independent of Axon-specific mechanisms
I will explain with an example. My GWT project has a Company module, which lets a user add, edit, delete, select and list companies.
Of these, the add, edit and delete operations land the user back on the CompanyList page.
Thus, having three different events - CompanyAddedEvent, CompanyUpdatedEvent and CompanyDeletedEvent - and their respective event handlers seems like overkill to me, as there is absolutely no difference in their function.
Is it OK to let a single event manage the three operations?
One alternative I can think of is to use a single event like CompanyListInvokedEvent. However, that doesn't feel quite appropriate either, as the event is not really the list being invoked, but a company being added/updated/deleted.
If it had been only a single module, I would have gotten the task done with three separate events. But 10 other such modules face the same dilemma. That means 10 x 3 = 30 event classes along with their 30 respective handlers. The number is large enough for me to reconsider.
What would be a good solution to this?
UPDATE -
@ColinAlworth's answer made me realize that I could easily use generics instead of my stupid solution. The following code represents an event, EntityUpdatedEvent, which would be raised whenever an entity is updated.
Event class -
public class EntityUpdatedEvent<T> extends GwtEvent<EntityUpdatedEventHandler<T>> {

    private Type<EntityUpdatedEventHandler<T>> type;
    private final String statusMessage;

    public EntityUpdatedEvent(Type<EntityUpdatedEventHandler<T>> type, String statusMessage) {
        this.statusMessage = statusMessage;
        this.type = type;
    }

    public String getStatusMessage() {
        return this.statusMessage;
    }

    @Override
    public com.google.gwt.event.shared.GwtEvent.Type<EntityUpdatedEventHandler<T>> getAssociatedType() {
        return this.type;
    }

    @Override
    protected void dispatch(EntityUpdatedEventHandler<T> handler) {
        handler.onEventRaised(this);
    }
}
Event handler interface -
public interface EntityUpdatedEventHandler<T> extends EventHandler {
    void onEventRaised(EntityUpdatedEvent<T> event);
}
Adding the handler to the event bus -
eventBus.addHandler(CompanyEventHandlerTypes.CompanyUpdated, new EntityUpdatedEventHandler<Company>() {
    @Override
    public void onEventRaised(EntityUpdatedEvent<Company> event) {
        History.newItem(CompanyToken.CompanyList.name());
        Presenter presenter = new CompanyListPresenter(serviceBundle, eventBus,
                new CompanyListView(), event.getStatusMessage());
        presenter.go(container);
    }
});
Likewise, I have two other generic events, Added and Deleted, which eliminates this redundancy from my event-related codebase entirely.
Are there any suggestions on this solution?
P.S. This discussion provides more insight on this problem.
To answer this question, let me first pose another way of thinking about this same kind of problem - instead of events, we'll just use methods.
In my tiered application, two modules communicate via an interface (notice that these methods are all void, so they are rather like events - the caller doesn't expect an answer back):
package com.acme.project;

public interface CompanyServiceInterface {
    public void addCompany(CompanyDto company) throws AcmeBusinessLogicException;
    public void updateCompany(CompanyDto company) throws AcmeBusinessLogicException;
    public void deleteCompany(CompanyDto company) throws AcmeBusinessLogicException;
}
This seems like overkill to me - why not just reduce this API to one method and add an enum argument to simplify it? That way, when I build an alternative implementation or need to mock this in my unit tests, I have just one method to build instead of three. This becomes clearly overkill when I build the rest of my application - why not just ObjectServiceInterface.modify(Object someDto, OperationEnum invocation); to work for all 10 modules?
One answer is that you might want to drastically modify the implementation of one operation but not the others - now that you've reduced this to just one method, all of that belongs inside a switch case. Another is that once it has been simplified this way, the inclination is often to simplify further - perhaps to combine create and update into just one method. Once that is done, all call sites must fulfill all possible details of that method's contract instead of just the one specific case.
If the receivers of those events are simple and will remain so, there may be no good reason to not just have a single ModelModifiedEvent that clearly is generic enough for all possible use cases - perhaps just wrapping the ID to request that all client modules refresh their view of that object. If a future use case arises where only one kind of event is important, now the event must change, as must all sites that cause the event to be created so that they properly populate this new field.
Java shops typically don't use Java because it is the prettiest language, or because it is the easiest language to write or find developers for, but because it is relatively easy to maintain and refactor. When designing an API, it is important to consider future needs, but also to think about what it will take to modify the current API - your IDE almost certainly has a shortcut key to find all invocations of a particular method or constructor, allowing you to easily find all places where it is used and update them. So consider what other use cases you expect, and how easily the rest of the codebase can be updated.
Finally, don't forget about generics - for my example above, I would probably make a DtoServiceInterface to simplify matters, so that I just declare the one interface with three methods, and implement it and refer to it as needed. In the same way, you can make one set of three GwtEvent types (with *Handler interfaces and possibly Has*Handlers as well), but keep them generic for all possible types. Consider com.google.gwt.event.logical.shared.SelectionEvent<T> as an example here - in your case you would probably want to make the model object type a parameter so that handlers can check which type of event they are dealing with (remember that generics are erased in Java), or source from one EventBus for each model type.
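As a rough illustration of that one-generic-contract idea (sketched here in Kotlin with invented names; in a GWT project this would of course be plain Java, along the lines of the interface above):

// Hypothetical generic contract, declared once and reused by every module.
interface DtoService<T> {
    fun add(dto: T)
    fun update(dto: T)
    fun delete(dto: T)
}

class CompanyDto(val name: String)

// Each module only picks its type parameter; the three operations stay distinct.
class CompanyService : DtoService<CompanyDto> {
    override fun add(dto: CompanyDto) { /* persist a new company */ }
    override fun update(dto: CompanyDto) { /* update an existing company */ }
    override fun delete(dto: CompanyDto) { /* remove the company */ }
}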
I'm writing a turn-based strategy game. Each player in the game has a team of units which can be individually controlled. On a user's turn, the game currently follows a pretty constant sequence of events:
Select a unit -> Move the selected unit -> Issue a command -> Confirm
I could implement this by creating a game class that keeps track of which of these stages the player is in and providing methods to move from one stage to the next, like this:
interface TeamCommander {
    public void select(Coordinate where);
    public void move(Coordinate to);
    public void sendCommand(String command);
    public void execute();
}
However, that would allow the possibility of a method being called at the wrong time (for example, calling move() before calling select()), and I would like to avoid that. So I currently have it implemented statelessly, like this:
interface UnitSelector {
    public UnitMover select(Coordinate where);
}

interface UnitMover {
    public UnitCommander move(Coordinate to);
}

interface UnitCommander {
    public CommandExecutor sendCommand(String command);
}

interface CommandExecutor {
    public void execute();
}
However, I'm having difficulty presenting this information to the user. Since this is stateless, the game model does not store any information about what the user is currently doing, and thus the view can't query the model about it. I could store some state in the GUI, but that would be bad form. So, my question is: does anyone have an idea about how to resolve this?
First, there's something I'm not getting here: You have to be storing persistent state somewhere, even if it is only in the View / GUI. Without persistent state you cannot have a game. I'm guessing you're using either ASP or PHP; if so, use sessions to track state.
Secondly, build your state logic into that, so it is known where in the input sequence you are for each player and each unit in that player's team. Don't try to get fancy with it: B requires A, C requires B, and so on. While you're writing it, give yourself a scaffold by throwing exceptions if the call order comes up incorrect (which you should be checking on every user input, as I assume this is an event-driven rather than loop-driven game), and debug it from there.
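To make that concrete, here is one rough way it could look (a Kotlin sketch with made-up names, not tied to your actual model or framework):

// The fixed input sequence for a player's turn.
enum class Stage { SELECT, MOVE, COMMAND, CONFIRM }

class Coordinate(val x: Int, val y: Int)

class TeamCommander {
    // Persistent, queryable state: the view can ask which stage the player is in.
    var stage: Stage = Stage.SELECT
        private set

    private fun expect(expected: Stage) {
        check(stage == expected) { "Expected stage $expected but was $stage" }
    }

    fun select(where: Coordinate) {
        expect(Stage.SELECT)
        // ... mark the unit at `where` as selected ...
        stage = Stage.MOVE
    }

    fun move(to: Coordinate) {
        expect(Stage.MOVE)
        // ... move the selected unit ...
        stage = Stage.COMMAND
    }

    fun sendCommand(command: String) {
        expect(Stage.COMMAND)
        // ... record the pending command ...
        stage = Stage.CONFIRM
    }

    fun execute() {
        expect(Stage.CONFIRM)
        // ... carry out the turn ...
        stage = Stage.SELECT // ready for the next unit
    }
}

This keeps the caller-facing API a single object while still rejecting out-of-order calls, and the exposed stage property gives the view something to query.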
As an aside: I get suspicious when I see interfaces with a single method as in your second example above. An interface typically informs of there being a unique SET of functionalities which different classes each fulfill -- unless you are trying to construct multiple different classes which use slightly different sets of individual method signatures, don't do what you're doing there. It is all fine and good to say "code to an interface and not an implementation", but you need to first take the top down approach, saying, "How does my ultimate client code (in your root game logic class or method) need to call for such-and-such to occur?" and keep asking that question up the call stack (i.e. at each subsequent sub-call codepoint). If you try to build it from the bottom up, you will end up with the confusing and unnecessarily complicated code I see there. The only other exception to this which I see on a regular basis is the command pattern, and that is generally intended to look like
void execute();
or
void execute(Object data);
...But typically not a whole slew of slightly different method signatures (again possible, but unlikely). My gut feeling comes from my experience with such constructs: they usually don't make sense, and you end up completely refactoring the code that uses them.
Where is the proper place to perform validation given the following scenario/code below:
In MethodA only: since this is the public method which is meant to be used by external assemblies?
In MethodA and B since both these can be accessed outside the class?
Or in Methods A, B and C, since MethodC may be used by another internal method (though this might not be efficient, since the programmer can already see the code for MethodC and therefore should know which parameters are valid to pass)?
Thanks for any input.
public class A
{
    public void MethodA(object param)
    {
        MethodB(param);
    }

    internal void MethodB(object param)
    {
        MethodC(param);
    }

    private void MethodC(object param)
    {
    }
}
Parameter validation should always be performed regardless of the caller's location (inside or outside of the assembly). Defensive programming, one can say.
MethodC; that way the parameter always gets checked, even if someone comes along later and adds a call to MethodC from within class A, or they make MethodC public. Any exception should be bubbled up to where it can be best dealt with.
There isn't a 'proper' place, except to adhere to DRY principles and avoid copying the validation code to several places. I'd normally suggest that you delay validation to the latest possible stage; then, if the parameter is never used, you don't spend time validating it. This also gives the validation some locality to the place where the parameter is used, and you never need to wonder 'has this parameter been validated yet?' because the validation is right there.
Given that a more likely scenario would involve each method having different parameters, and probably also some
if (P1 == 1) { MethodA(P2); } else { MethodB(P2); }
type logic in the longer term, it makes more sense to validate each parameter at the point of entry, especially as you may want different error handling depending on where the method was called from.
If the validation logic for a given parameter starts to get complex (i.e. more than five lines of code), then consider a private method to validate that parameter.
I have a repository data access pattern like so:
public interface IRepository<T>
{
    // ... query members ...
    void Add(T item);
    void Remove(T item);
    void SaveChanges();
}
Imagine the scenario where I have a repository of users, and users have a username which must be unique. If I create a new user with a username that already exists (imagine I have a dumb UI layer that doesn't check), then when I add it to the repository all is fine; when I hit SaveChanges, my repository attempts to save the item to the database, and the database, which luckily is enforcing these rules, throws back an aborted exception due to a unique key violation.
It seems to me that this validation is generally done in the layer ABOVE the repository: the layers that call it know they should ensure this rule, and will pre-check before executing (hopefully in some kind of transaction scope to avoid races, though that doesn't always seem possible with the persistence-medium ignorance involved).
Shouldn't my repository be enforcing these rules? What happens if my medium is dumb, such as a flat database without any integrity checks?
And if the repository is validating these kinds of things, how would it inform callers about a violation in a way that lets them accurately identify what went wrong? Exceptions seem like a poor way to handle this because they're relatively expensive and hard to specialize down to a specific violation.
I've been playing around with a 'Can' pattern: for example, CanAdd alongside Add. Add calls CanAdd and throws an invalid operation exception if CanAdd reports a violation; CanAdd also returns a list of violations describing what went wrong. This way I can start to stack these routines up the call stack: for example, the service layer above would also have a 'Can' method that would return the repository's report plus any additional violations it wanted to check (such as more complicated business rules, e.g. which users can invoke specific actions).
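For what it's worth, a bare-bones sketch of that 'Can' shape might look like this (Kotlin purely for illustration; the names and the duplicate-username rule are assumptions):

data class Violation(val field: String, val message: String)

class User(var username: String)

class UserRepository(private val existingUsernames: MutableSet<String>) {

    // Reports every rule the item would break, without throwing.
    fun canAdd(item: User): List<Violation> {
        val violations = mutableListOf<Violation>()
        if (item.username in existingUsernames) {
            violations += Violation("username", "Username '${item.username}' is already taken")
        }
        return violations
    }

    // Add enforces the same rules by delegating to canAdd.
    fun add(item: User) {
        val violations = canAdd(item)
        if (violations.isNotEmpty()) {
            // analogue of the invalid operation exception mentioned above
            throw IllegalStateException(violations.joinToString("; ") { it.message })
        }
        existingUsernames += item.username
        // ... queue the item for persistence ...
    }
}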
Validation of data is such a fundamental concern, yet I feel there is no real guidance on how to handle more advanced validation requirements reliably.
Edit: additionally, in this case, how do you handle validation of entities that are already in the repository and are updated via change tracking? For example:
using (var repo = ...)
{
    var user = repo.GetUser("user_b");
    user.Username = "user_a";
    repo.SaveChanges(); // boom!
}
As you can imagine, this will cause an exception. Going deeper down the rabbit hole, imagine I've got a validation system in place for when I add the user, and I do something like this:
using (var repo = ...)
{
    var newUser = new User("user_c");
    repo.Add(newUser); // so far so good.

    var otherUser = repo.GetUser("user_b");
    otherUser.Username = "user_c";
    repo.SaveChanges(); // boom!
}
In this case, validating when adding the user was pointless, as 'downstream' actions could break things anyway; the Add validation rule would need to check the actual persistence store AND any items queued up to be persisted.
And that still doesn't address the earlier change-tracking problem. So do I now start to validate the SaveChanges call? It seems like there would be a huge number of violations that could arise from apparently unrelated actions.
Perhaps I'm asking for an unrealistic, perfect safety net?
Thanks in advance,
Stephen.
The ideal rule is that each of your layers should be a black box and none of them should depend on validation of another layer. The reason behind this is that the DB has no idea of the UI and vice versa. So when the DB throws an exception, the UI must have DB knowledge (bad thing) to convert that into something the UI layer can understand, so it can eventually convert it into something the user can understand. Ugh.
Unfortunately, adding validation to every layer is also hard. My solution: either put the validation in a single place (maybe the business layer) and make the other layers really dumb, so they don't check anything themselves.
Or write your validation in an abstract way into the model and then generate all the validation from that. For example:
String name;
Description nameDesc = new Description("name",
new MaxLength(20), new NotNull());
This way, you can write code that examines the Description (via code generation, or even at runtime) and performs the validation in each layer at little cost, because one change fixes all layers.
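A rough sketch of how that description-driven idea could be wired up (Kotlin purely for illustration; Constraint, NotNull, MaxLength and Description extend the pseudocode above and are not a real library):

// A constraint knows how to check one value and describe a failure.
interface Constraint {
    fun check(value: Any?): String? // null means "ok", otherwise an error message
}

class NotNull : Constraint {
    override fun check(value: Any?) = if (value == null) "must not be null" else null
}

class MaxLength(private val max: Int) : Constraint {
    override fun check(value: Any?) =
        if (value is String && value.length > max) "must be at most $max characters" else null
}

// One declarative description per field; every layer validates against the same source.
class Description(val field: String, vararg constraints: Constraint) {
    private val rules = constraints.toList()

    fun validate(value: Any?): List<String> =
        rules.mapNotNull { it.check(value) }.map { "$field $it" }
}

fun main() {
    val nameDesc = Description("name", MaxLength(20), NotNull())
    println(nameDesc.validate(null))                                  // [name must not be null]
    println(nameDesc.validate("A company name that is far too long")) // [name must be at most 20 characters]
}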
[EDIT] For validation, you only have these cases:
Duplicate key
Above some limit
Below some limit
Null (not specified)
Empty
Formatting error (date fields, etc)
So you should be able to get away with a small set of exception classes carrying the object, the field, the old and new values, plus special info like the limit that was hit. I'm therefore wondering where your many exception classes would come from.
As for your other question, this is ... uh ... "solved" by the two-phase commit protocol. I say "solved" because there are situations where the protocol breaks down, and in my experience it's much better to give the user a "Retry?" dialog or some other means to fix the problem than to invest a lot of time in two-phase commit.