It is good practice to load as little data as possible from the database in business operations. And sometimes using the entity (model) object directly as the command object is not secure. So what should the approach be for selecting the command object? Using a separate command object for each view does not make sense.
Use cases or a nice resource is appreciated. Thanks.
It's true that you may have some security concerns when using domain objects as command objects: Spring tries to bind every single parameter to the command fields, so a user could add extra parameters to the request to modify fields that were not supposed to be bound. If you go for this approach, make sure to define either a white list or a black list of parameters to be bound:
@InitBinder
public void initBinder(WebDataBinder binder) {
    binder.setAllowedFields("firstName", "lastName");
}
or
@InitBinder
public void initBinder(WebDataBinder binder) {
    binder.setDisallowedFields("id", "creationDate");
}
The alternative is to create an extra class for the form. This class can adapt better to the UI needs if your domain objects don't match what you need in the view layer, and it can encapsulate any web logic, validation, and the logic to copy to/from a domain object.
So I would say that going for domain objects is fine as long as you set a white/black list and you don't modify the domain object to suit UI needs (adding extra fields or extra logic); otherwise you should create an additional command object.
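For illustration, here is a minimal sketch of that second option, with invented class and field names: a small form object that exposes only what the view may edit, plus the copy logic to and from the domain object.

// Hypothetical domain entity, reduced to the fields needed for the sketch.
class Customer {
    String firstName;
    String lastName;
}

// Hypothetical form/command object: it exposes only what the view may edit
// and owns the copy logic to and from the domain object.
public class CustomerForm {

    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public static CustomerForm fromDomain(Customer customer) {
        CustomerForm form = new CustomerForm();
        form.firstName = customer.firstName;
        form.lastName = customer.lastName;
        return form;
    }

    public void applyTo(Customer customer) {
        customer.firstName = this.firstName;
        customer.lastName = this.lastName;
    }
}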
Related
I was wondering where exactly we should put input validation (imagine an API call that sends input to apply a user's free times). Is it right to inject a validation class into the service layer and call a validate method inside the service? Or is it better to put it in the infrastructure layer, or even in the domain model? I just want to see some sample code that implements validation of input for an API in a Domain-Driven Design approach. And what if I use a CQRS architecture?
In my DDD/CQRS project I use the following approach. The project structure is an API layer, a Domain layer, and a Data Access layer. All input data from the UI or from the user is validated before commands are created and dispatched to update the state of the domain. We validate input data twice: once in the UI (an Angular app), and a second time in the Web API layer. If the data is valid, the CQRS command is created and dispatched; after that you can have business-logic validation. For validation you can use FastValidator or FluentValidation.
UPDATE: Here is a simple example. We have an API for creating a Batch entity.
[HttpPost]
[Route("create")]
public IHttpActionResult Create([FromBody] BatchEditModel model)
{
    var createCommand = model.Map<BatchEditModel, CreateBatchCommand>();
    var result = (OperationResult<int>) _commandDispatcher.Dispatch(createCommand);
    return Result(result);
}
As you can see, the user input data will be a BatchEditModel, so we have a BatchEditModelValidator which contains the input data validation:
public class BatchEditModelValidator : AbstractValidator<BatchEditModel>
{
    public BatchEditModelValidator()
    {
        RuleFor(x => x.Number).NotEmpty()
            .WithMessage(ValidatorMessages.MustBeSpecified);
        RuleFor(x => x.ClientId).GreaterThan(0)
            .WithMessage(ValidatorMessages.MustBeSpecified);
        RuleFor(x => x.EntryAssigneeId).GreaterThan(0)
            .WithMessage(ValidatorMessages.MustBeSpecified);
        RuleFor(x => x.ReviewAssigneeId).GreaterThan(0)
            .WithMessage(ValidatorMessages.MustBeSpecified);
        RuleFor(x => x.Description).NotEmpty()
            .WithMessage(ValidatorMessages.MustBeSpecified);
    }
}
This validator is executed before BatchEditModel is mapped to CreateBatchCommand, and in CreateBatchCommandHandler we have the business-logic validation CheckUniqueNumber:
public OperationResult Handle(CreateBatchCommand command)
{
    var result = new OperationResult<int>();
    if (CheckUniqueNumber(result, command.ClientId, command.Number))
    {
        if (result.IsValid)
        {
            var batch = _batchFactory.Create(command);
            _batchRepository.Add(batch);
            _batchRepository.Save();
            result.Value = batch.Id;
        }
    }
    return result;
}
My approach is to put validation in the domain model: I validate the functionality of aggregates, entities, value objects, etc.
Then you can also validate in the application services and in the user interface. But those validations are a plus, a validation enhancement from the user's point of view, since feedback is faster.
Why this duplication of validations at different layers? Well, because if you rely only on UI or application-service validation, it may happen that it doesn't work for whatever reason, and if you don't also validate in the domain model, you end up executing domain functionality without validating it.
Also, I would point out that not all validations can be done in the UI or in the application layer, because some of them require access to the domain.
Finally, doing CQRS or not is independent of where you decide to put the validations. It's just that if you do CQRS, validations at the application layer are easier to do, since you can put them in decorators that wrap commands and queries.
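As a rough sketch of that last idea (the interfaces and names here are invented, not taken from any particular framework), a validating decorator around a command handler could look like this:

import java.util.List;

// Hypothetical interfaces, just to make the sketch self-contained.
interface CommandHandler<C> { void handle(C command); }
interface Validator<C> { List<String> validate(C command); }

// The decorator: validate the command first, then delegate to the wrapped handler.
class ValidatingHandler<C> implements CommandHandler<C> {

    private final CommandHandler<C> inner;
    private final Validator<C> validator;

    ValidatingHandler(CommandHandler<C> inner, Validator<C> validator) {
        this.inner = inner;
        this.validator = validator;
    }

    @Override
    public void handle(C command) {
        List<String> errors = validator.validate(command);
        if (!errors.isEmpty()) {
            // Reject before the domain is touched.
            throw new IllegalArgumentException(String.join("; ", errors));
        }
        inner.handle(command);
    }
}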
Hope my explanation helps.
where should we put input validation [in Domain-Driven Design]?
This is largely unrelated to DDD, but: as close as possible to the input source.
You aren't going to wait until invalid data has crossed 4 layers to discard it.
Input validation precisely means you don't need anything else (e.g. loading other data) to check it, so you might as well do it as soon as you can. Of course, caveats apply: any validation that can be circumvented must be double-checked - client-side JavaScript, for instance.
what if I use CQRS architecture?
I wouldn't expect CQRS to change things very much.
Usually, by the time you are invoking a method in a domain entity, your inputs should have already been converted from their domain agnostic form into value objects.
Value objects are expected to be constructed in a valid state, and often include a check of a constraint within the constructor/factory method that produces it. However, in Java and similar languages, the implementation of the constructor usually throws (because constructors don't have any other way of reporting a problem).
Often what clients want instead is a clear understanding of all of the constraints violated by the input data, rather than just the first one. So you may need to pull the constraints out as first class citizens in the model, as predicates that can be checked.
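To illustrate both points with a hedged sketch (the value object and its rules are invented): the constructor rejects invalid state by throwing, while the same constraints are also exposed as a checkable predicate so a client can collect every violation instead of only the first one.

import java.util.ArrayList;
import java.util.List;

// Hypothetical value object: it can only be constructed in a valid state.
public final class EmailAddress {

    private final String value;

    public EmailAddress(String value) {
        List<String> violations = violationsOf(value);
        if (!violations.isEmpty()) {
            // A constructor can only report a problem by throwing.
            throw new IllegalArgumentException(String.join("; ", violations));
        }
        this.value = value;
    }

    // The constraints pulled out as a first-class, checkable predicate.
    public static List<String> violationsOf(String candidate) {
        List<String> violations = new ArrayList<>();
        if (candidate == null || candidate.isEmpty()) {
            violations.add("email must not be empty");
        } else if (!candidate.contains("@")) {
            violations.add("email must contain '@'");
        }
        return violations;
    }

    public String value() {
        return value;
    }
}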
You should validate in your app service before attempting to modify your domain. Validation should be towards the edges of your app (but not in the UI) so invalid or incomplete requests aren't even getting into your domain model.
I consider it two levels of validation, because you validate the request before attempting some behavior on the model, and then the model should again verify its internal consistency, since it can never be persisted in an invalid state.
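A minimal sketch of those two levels, with invented names: the application service rejects an obviously incomplete request at the edge, and the aggregate still protects its own invariant.

import java.time.LocalDateTime;

// All names here are invented for illustration.
class Appointment {

    private LocalDateTime start;

    Appointment(LocalDateTime start) {
        this.start = start;
    }

    void rescheduleTo(LocalDateTime newStart) {
        // Second level: the aggregate verifies its own consistency.
        if (newStart.isBefore(LocalDateTime.now())) {
            throw new IllegalStateException("an appointment cannot be moved into the past");
        }
        this.start = newStart;
    }
}

class AppointmentService {

    void reschedule(Appointment appointment, LocalDateTime newStart) {
        // First level: reject an invalid or incomplete request before touching the domain.
        if (appointment == null || newStart == null) {
            throw new IllegalArgumentException("appointment and newStart are required");
        }
        appointment.rescheduleTo(newStart);
        // ...persist the appointment via a repository here...
    }
}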
In this great book about Domain-Driven Design, a chapter is dedicated to the user interface and its relationship to domain objects.
One point that confuses me is the comparison between Use case optimal queries and presenters.
The excerpt dealing with optimal queries (page 517) is:
Rather than reading multiple whole Aggregate instances of various types and then programmatically composing them into a single container (DTO or DPO), you might instead use what is called a use case optimal query. This is where you design your Repository with finder query methods that compose a custom object as a superset of one or more Aggregate instances. The query dynamically places the results into a Value Object (6) specifically designed to address the needs of the use case. You design a Value Object, not a DTO, because the query is domain specific, not application specific (as are DTOs). The custom use case optimal Value Object is then consumed directly by the view renderer.
Thus, the benefit of optimal queries is to directly provide a view-specific value object, acting as the real view model.
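As a rough sketch of what such a repository finder might look like (the names are invented, Java-style): the finder returns a value object shaped for one specific view instead of whole aggregates to be composed later.

import java.math.BigDecimal;
import java.util.List;

// Hypothetical value object designed for one use case / view.
final class OrderSummary {

    final String orderNumber;
    final String customerName;
    final BigDecimal total;

    OrderSummary(String orderNumber, String customerName, BigDecimal total) {
        this.orderNumber = orderNumber;
        this.customerName = customerName;
        this.total = total;
    }
}

interface OrderRepository {
    // Regular aggregate access would live here, e.g. a findById(...) method.

    // Use case optimal query: composes a superset of aggregate data
    // directly into the view-shaped value object above.
    List<OrderSummary> findOpenOrderSummariesFor(String customerId);
}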
A page later, presenter pattern is described:
The presentation model acts as an Adapter. It masks the details of the domain model by providing properties and behaviours that are designed in terms of the needs of the view. Rather than requiring the domain model to specifically support the necessary view properties, it is the responsibility of the Presentation Model to derive the view-specific indicators and properties from the state of the domain model.
It sounds like both approaches achieve the construction of a view model specific to the use case.
Currently my call chain (using Play Framework) looks like:
For queries: Controllers (acting as Rest interface sending Json) -> Queries (returning specific value object through optimal queries)
For commands: Controllers (acting as Rest interface sending Json) -> Application services (Commands) -> domain services/repositories/Aggregates (application services returns void)
My question is: if I already practice the use case optimal query, what would be the benefit of implementing the presenter pattern? Why bother with a presenter if one could always use optimal queries to satisfy the client needs directly?
I can think of just one benefit of the presenter pattern: it deals with commands, not queries, providing to the command the domain objects corresponding to the view models determined by the presenter. The controller would then be decoupled from the domain objects.
Indeed, another excerpt of Presenter description is:
Additionally, edits performed by the user are tracked by the Presentation Model. This is not the case of placing overloaded responsibilities on the Presentation Model, since it's meant to adapt in both directions, model to view and view to model.
However, I prefer sending pure primitives to application services (commands), rather than dealing directly with domain objects, so this benefit would not apply to me.
Any explanation?
Just a guess :)
The presenter pattern could reuse your repository's aggregate finder methods as much as possible. For example, we have two views; in this case we need two adapters (one adapter per view), but we only need one repository finder method:
class CommentBriefViewAdapter {
    private Comment comment;

    public String getTitle() {
        // return only the first 10 characters of the title, hide the rest
        return partOf(comment.getTitle());
    }
    // ... other fields to display
}

class CommentDetailViewAdapter {
    private Comment comment;

    public String getTitle() {
        return comment.getTitle(); // return the full title
    }
    // ... other fields to display
}

// In the controller:
model.addAttribute(new CommentBriefViewAdapter(commentRepo.findBy(commentId)));
// same repository method
model.addAttribute(new CommentDetailViewAdapter(commentRepo.findBy(commentId)));
But optimal queries are view-oriented (one query per view). I think these two solutions are designed for a non-CQRS-style DDD architecture. They're no longer needed in a CQRS-style architecture, since queries are not based on the repository but on a specific thin data layer.
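For comparison, a sketch of what such a thin read-side query could look like in a CQRS-style setup (the class, table and SQL are invented): it bypasses repositories and aggregates entirely and reads data already shaped for one view.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical thin read-side query: no repository, no aggregate,
// just a query shaped for the brief-comment view.
class CommentBriefQuery {

    private final Connection connection;

    CommentBriefQuery(Connection connection) {
        this.connection = connection;
    }

    String findBriefTitle(long commentId) throws SQLException {
        String sql = "SELECT SUBSTR(title, 1, 10) FROM comments WHERE id = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setLong(1, commentId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}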
I'm working on an ASP.NET MVC system where you may click on an Ajax link that opens a window (a Kendo window, but that does not affect the situation) with a complex flow. To make this less of a nightmare to manage, I made a ViewModel (as I should), but this ViewModel is a complex object due to the complexity of the procedure.
There is anywhere from a single window to 5 windows that ask various questions depending on a lot of conditions (including, but not limited to, what time you click the link, who you are, what schedule is attached to your account and, obviously, your previous answers in this flow).
The problem is that, having a complex object, I cannot simply use @Html.HiddenFor(o => o.XXX). So I went looking for an alternative, and it left me with a single option: TempData. I'm really not a fan of dynamics and object types; I'd really like to keep this ViewModel strongly typed.
What would be the best way to approach this?
Here is a case where using Session or TempData might make sense. Contrary to popular belief, you can make these somewhat strongly typed. Not like a viewmodel, but you can avoid a mess of string keys by using extension methods.
For example, instead of doing something like this:
TempData["NestedVariable1"] = someObject;
...
var someObject = TempData["NestedVariable1"] as CustomType;
You can write extension methods to store these variables, and encapsulate the keys and casting in the extension methods.
public static class ComplexFlowExtensions
{
    private static string Nv1Key = "temp_data_key";

    public static void NestedVariable1(this TempDataDictionary tempData, CustomType value)
    {
        // write the value to temp data
        tempData[Nv1Key] = value;
    }

    public static CustomType NestedVariable1(this TempDataDictionary tempData)
    {
        // read the value from temp data
        return tempData[Nv1Key] as CustomType;
    }
}
You can then read / write these values from either controllers or views like this:
TempData.NestedVariable1(someObject);
...
var someObject = TempData.NestedVariable1();
You could use the same pattern with Session as well. And instead of saving each individual scalar value in a separate variable, you should be able to store an entire nested object graph in the variable. Either that, or serialize it to JSON and store that, then deserialize when you get it back out. Either way, I think this beats a ton of hidden fields written out to your view's form.
I will explain with an example. My GWT project has a Company module, which lets a user add, edit, delete, select and list companies.
Of these, the add, edit and delete operations land the user back on the CompanyList page.
Thus, having three different events - CompanyAddedEvent, CompanyUpdatedEvent and CompanyDeletedEvent, and their respective event handlers - seems overkill to me, as there is absolutely no difference in their function.
Is it OK to let a single event manage the three operations?
One alternative I can think of is to use an event like CompanyListInvokedEvent. However, I feel it is not quite appropriate, because the event is actually not the list being invoked, but a company being added/updated/deleted.
If it had been only a single module, I would have gotten the task done with three separate events. But 10 other such modules face this dilemma. That means 10x3 = 30 event classes along with their 30 respective handlers. The number is large enough for me to reconsider.
What would be a good solution to this?
UPDATE -
@ColinAlworth's answer made me realize that I could easily use generics instead of my stupid solution. The following code represents an event, EntityUpdatedEvent, which would be raised whenever an entity is updated.
Event class -
public class EntityUpdatedEvent<T> extends GwtEvent<EntityUpdatedEventHandler<T>> {

    private Type<EntityUpdatedEventHandler<T>> type;
    private final String statusMessage;

    public EntityUpdatedEvent(Type<EntityUpdatedEventHandler<T>> type, String statusMessage) {
        this.statusMessage = statusMessage;
        this.type = type;
    }

    public String getStatusMessage() {
        return this.statusMessage;
    }

    @Override
    public com.google.gwt.event.shared.GwtEvent.Type<EntityUpdatedEventHandler<T>> getAssociatedType() {
        return this.type;
    }

    @Override
    protected void dispatch(EntityUpdatedEventHandler<T> handler) {
        handler.onEventRaised(this);
    }
}
Event handler interface -
public interface EntityUpdatedEventHandler<T> extends EventHandler {
    void onEventRaised(EntityUpdatedEvent<T> event);
}
Adding the handler to event bus -
eventBus.addHandler(CompanyEventHandlerTypes.CompanyUpdated, new EntityUpdatedEventHandler<Company>() {
    @Override
    public void onEventRaised(EntityUpdatedEvent<Company> event) {
        History.newItem(CompanyToken.CompanyList.name());
        Presenter presenter = new CompanyListPresenter(serviceBundle, eventBus, new CompanyListView(), event.getStatusMessage());
        presenter.go(container);
    }
});
Likewise, I have two other generic events, Added and Deleted, thus eliminating the redundancy from my event-related codebase.
Are there any suggestions on this solution?
P.S. This discussion provides more insight into this problem.
To answer this question, let me first pose another way of thinking about this same kind of problem - instead of events, we'll just use methods.
In my tiered application, two modules communicate via an interface (notice that these methods are all void, so they are rather like events - the caller doesn't expect an answer back):
package com.acme.project;
public interface CompanyServiceInterface {
    public void addCompany(CompanyDto company) throws AcmeBusinessLogicException;
    public void updateCompany(CompanyDto company) throws AcmeBusinessLogicException;
    public void deleteCompany(CompanyDto company) throws AcmeBusinessLogicException;
}
This seems like overkill to me - why not just reduce the size of this API to one method and add an enum argument to simplify it? That way, when I build an alternative implementation or need to mock this in my unit tests, I just have one method to build instead of three. This becomes clearly overkill when I build the rest of my application - why not just ObjectServiceInterface.modify(Object someDto, OperationEnum invocation); to work for all 10 modules?
One answer is that you might want to drastically modify the implementation of one but not the others - now that you've reduced this to just one method, all of that belongs inside a single switch case. Another is that once simplified in this way, the inclination is often to simplify further - perhaps to combine create and update into just one method. Once this is done, all call sites must make sure to fulfill all possible details of that method's contract instead of just the one specific one.
If the receivers of those events are simple and will remain so, there may be no good reason not to just have a single ModelModifiedEvent that is clearly generic enough for all possible use cases - perhaps just wrapping the ID, to request that all client modules refresh their view of that object. If a future use case arises where only one kind of event is important, now the event must change, as must all sites that cause the event to be created, so that they properly populate this new field.
Java shops typically don't use Java because it is the prettiest language, or because it is the easiest language to write or find developers for, but because it is relatively easy to maintain and refactor. When designing an API, it is important to consider future needs, but also to think about what it will take to modify the current API - your IDE almost certainly has a shortcut key to find all invocations of a particular method or constructor, allowing you to easily find all places where it is used and update them. So consider what other use cases you expect, and how easily the rest of the codebase can be updated.
Finally, don't forget about generics - for my example above, I would probably make a DtoServiceInterface to simplify matters, so that I just declare the one interface with three methods, and implement it and refer to it as needed. In the same way, you can make one set of three GwtEvent types (with *Handler interfaces and possibly Has*Handlers as well), but keep them generic for all possible types. Consider com.google.gwt.event.logical.shared.SelectionEvent<T> as an example here - in your case you would probably want to make the model object type a parameter so that handlers can check which type of event they are dealing with (remember that generics are erased in Java), or source from one EventBus for each model type.
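For example, the generic interface hinted at above might look something like this (a sketch with invented names; AcmeBusinessLogicException is taken from the earlier snippet):

// Hypothetical generic service interface: three explicit methods,
// declared once and reused for every DTO type.
public interface DtoService<T> {
    void add(T dto) throws AcmeBusinessLogicException;
    void update(T dto) throws AcmeBusinessLogicException;
    void delete(T dto) throws AcmeBusinessLogicException;
}

// Usage: one interface, many modules.
// interface CompanyService extends DtoService<CompanyDto> {}
// interface EmployeeService extends DtoService<EmployeeDto> {}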
I'd assume that since the query language typically sits within the controller, it belongs to that component; but if I play devil's advocate, I'd argue that the query language is executed within the domain of the model and is tightly coupled to that component, so it might also be a part of it.
Anyone know the answer? Is there a straight answer or is it technology specific?
Both are legitimate ways to implement it. The question is what you need to expose of your application to its users, and how. In Patterns of Enterprise Application Architecture (it figures, again, that I really like to quote that book ;) they offer two ways to implement the Domain Model:
Implement all application logic (and therefore the query language) in the Model. That makes your domain model very use-case specific (you can't reuse it that easily, since it already contains some application logic and a dependency on the backend storage being used), but it might be more appropriate for complex domain logic.
Put the application logic in another layer (which can be the Service Layer used by the controller in the MVC pattern, or the controller directly). That usually makes your model objects plain data containers, and you don't need to expose the whole (possibly complex) domain model structure to your users (you can make the interface as simple as possible).
As a code example:
// This can also be your controller
class UserService
{
    void save(User user)
    {
        user.password = md5(user.password);
        // Do the save query like "INSERT INTO users ..." or some ORM stuff
    }
}

class User
{
    String username;
    String password;

    // The following methods might be added if you put your application logic in the domain model
    void setPassword(String password)
    {
        // Creates a dependency on the hashing algorithm used
        this.password = md5(password);
    }

    void save()
    {
        // This creates a dependency on your backend, even though
        // the user object doesn't need to know which backend it gets saved to
        // INSERT INTO users ...
    }
}
}