Domain driven design is confusing - asp.net-mvc-3

1) What are BLL services? What's the difference between them and Service Layer services? What goes into domain services and what goes into the service layer?
2) How do I refactor the BLL model to give it behavior? The Post entity holds a collection of feedbacks, which already makes it possible to add another Feedback through feedbacks.Add(feedback). Obviously there are no calculations in a plain blog application. Should I define a method to add a Feedback inside the Post entity? Or should that behavior be maintained by a corresponding service?
3) Should I use the Unit-of-Work (and UnitOfWork Repositories) pattern as described in http://www.amazon.com/Professional-ASP-NET-Design-Patterns-Millett/dp/0470292784 or would it be enough to use the NHibernate ISession?

1) Business Layer and Service Layer are actually synonyms. The 'official' DDD term is an Application Layer.
The role of an Application Layer is to coordinate work between Domain Services and the Domain Model. This could mean, for example, that an Application Layer function first loads an entity through a Repository and then calls a method on the entity that will do the actual work.
2) Sometimes when your application is mostly data-driven, building a full featured Domain Model can seem like overkill. However, in my opinion, when you get used to a Domain Model it's the only way you want to go.
In the Post and Feedback case, you want an AddFeedback(Feedback) method from the beginning because it leads to less coupling (you don't have to know whether the Feedback items are stored in a List or in a Hashtable, for example) and it offers you a nice extension point. What if you ever want to add a check that no more than 10 Feedback items are allowed? If you have an AddFeedback method, you can easily add the check in one single place.
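A minimal, hedged sketch of that idea (written in Java here with hypothetical names; the same shape translates directly to a C# Post entity):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: the Post entity owns its feedback collection and exposes
// behaviour instead of the raw list.
public class Post {

    private static final int MAX_FEEDBACK = 10; // the "no more than 10" check from above

    private final List<Feedback> feedbacks = new ArrayList<>();

    public void addFeedback(Feedback feedback) {
        // One single place to enforce invariants on adding feedback.
        if (feedbacks.size() >= MAX_FEEDBACK) {
            throw new IllegalStateException("A post cannot have more than " + MAX_FEEDBACK + " feedback items");
        }
        feedbacks.add(feedback);
    }

    public List<Feedback> getFeedbacks() {
        // Callers never learn (or depend on) how the feedback items are stored internally.
        return Collections.unmodifiableList(feedbacks);
    }
}

// Stand-in for the domain type from the question; fields omitted.
class Feedback {
}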
3) The UnitOfWork and Repository patterns are a fundamental part of DDD. I'm no NHibernate expert, but it's always a good idea to hide infrastructure-specific details behind an interface. This reduces coupling and improves testability.

I suggest you first read the DDD book or its short version to get a basic comprehension of the building blocks of DDD. There's no such thing as a BLL-Service or a Service layer Service. In DDD you've got
the Domain layer (the heart of your software where the domain objects reside)
the Application layer (orchestrates your application)
the Infrastructure layer (for persistence, message sending...)
the Presentation layer.
There can be Services in all these layers. A Service is just there to provide behaviour to a number of other objects, it has no state. For instance, a Domain layer Service is where you'd put cohesive business behaviour that does not belong in any particular domain entity and/or is required by many other objects. The inputs and outputs of the operations it provides would typically be domain objects.
Anyway, whenever an operation seems to fit perfectly into an entity from a domain perspective (such as adding feedback to a post, which translates into Post.AddFeedback() or Post.Feedbacks.Add()), I always go for that rather than adding a Service that would only scatter the behaviour in different places and gradually lead to an anemic domain model. There can be exceptions, like when adding feedback to a post requires making connections between many different objects, but that is obviously not the case here.

You don't need a unit-of-work pattern on top of the NHibernate session:
Why would I use the Unit of Work pattern on top of an NHibernate session?
Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF


Is it problematic that Spring Data REST exposes entities via REST resources without using DTOs?

In my limited experience, I've been told repeatedly that you should not pass around entities to the front end or via rest, but instead to use a DTO.
Doesn't Spring Data REST do exactly this? I've looked briefly into projections, but those seem to just limit the data that is being returned, and it still expects an entity as a parameter to a POST method to save to the database. Am I missing something here, or are my coworkers and I incorrect in thinking that you should never pass around an entity?
tl;dr
No. DTOs are just one means to decouple the server side domain model from the representation exposed in HTTP resources. You can also use other means of decoupling, which is what Spring Data REST does.
Details
Yes, Spring Data REST inspects the domain model you have on the server side to reason about what the representations for the resources it exposes will look like. However, it applies a couple of crucial concepts that mitigate the problems a naive exposure of domain objects would bring.
Spring Data REST looks for aggregates and by default shapes the representations accordingly.
The fundamental problem with the naive "I throw my domain objects in front of Jackson" approach is that, from the plain entity model, it's very hard to reason about sensible representation boundaries. Entity models derived from database tables, in particular, tend to connect virtually everything to everything. This stems from the fact that important domain concepts like aggregates are simply not present in most persistence technologies (read: especially in relational databases).
However, I'd argue that in this case "Don't expose your domain model" addresses the symptoms of that problem more than its core. If you design your domain model properly, there's a huge overlap between what's beneficial in the domain model and what a good representation looks like to effectively drive that model through state changes. A couple of simple rules:
For every relationship to another entity, ask yourself: couldn't this rather be an ID reference? By using an object reference you pull a lot of the semantics of the other side of the relationship into your entity. Getting this wrong usually leads to entities referring to entities referring to entities, which is a problem on a deeper level. On the representation level, using ID references allows you to cut off data, cater for consistency scopes etc. (see the sketch after this list).
Avoid bi-directional relationships as they're notoriously hard to get right on the update side of things.
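As a small, hedged illustration of the ID-reference rule (hypothetical Order/Customer types, JPA annotations assumed):

import javax.persistence.Entity;
import javax.persistence.Id;

// Instead of Order holding a full object reference to the Customer aggregate ...
@Entity
public class Order {

    @Id
    private Long id;

    // ... it only keeps the other aggregate's identifier. The representation can expose
    // this as a link or plain value, and a PUT to an Order resource cannot accidentally
    // modify the Customer.
    private Long customerId;
}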
Spring Data REST does quite a few things to actually transfer those entity relationships into the proper mechanisms on the HTTP level: links in general and more importantly links to dedicated resources managing those relationships. It does so by inspecting the repositories declared for entities and basically replaces an otherwise necessary inlining of the related entity with a link to an association resource that allows you to manage that relationship explicitly.
That approach usually plays nicely with the consistency guarantees described by DDD aggregates on the HTTP level. PUT requests don't span multiple aggregates by default, which is a good thing as it implies a scope of consistency of the resource matching the concepts of your domain.
There's no point in forcing users into DTOs if that DTO just duplicates the fields of the domain object.
You can introduce as many DTOs for your domain objects as you like. In most of the cases, the fields captured in the domain object will reflect into the representation in some way. I have yet to see the entity Customer containing a firstname, lastname and emailAddress property, and those being completely irrelevant in the representation.
The introduction of DTOs doesn't guarantee decoupling by any means. I've seen way too many projects where they were introduced for cargo-cult reasons, simply duplicated all fields of the entity backing them, and thereby just caused additional effort because every new field had to be added to the DTOs as well. But hey, decoupling! Not. ¯\_(ツ)_/¯
That said, there are of course situations where you'd want to slightly tweak the representation of those properties, especially if you use strongly typed value objects for e.g. an EmailAddress (good!) but still want to render this as a plain String in JSON. But by no means is that a problem: Spring Data REST uses Jackson under the covers which offers you a wide variety of means to tweak the representation — annotations, mixins to keep the annotations outside your domain types, custom serializers etc. So there is a mapping layer in between.
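As a hedged sketch of that kind of tweak (EmailAddress is a hypothetical value object, rendered as a plain string via a custom Jackson serializer):

import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;

// Renders the hypothetical EmailAddress value object as a plain JSON string.
public class EmailAddressSerializer extends JsonSerializer<EmailAddress> {

    @Override
    public void serialize(EmailAddress value, JsonGenerator gen, SerializerProvider serializers)
            throws IOException {
        // EmailAddress is assumed to expose its wrapped value via toString().
        gen.writeString(value.toString());
    }
}

It could then be wired in with @JsonSerialize(using = EmailAddressSerializer.class) on the property, or via a Jackson module or mixin if you'd rather keep the annotation outside your domain type.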
Not using DTOs by default is not a bad thing per se. Just imagine the outcry by users about the amount of boilerplate necessary if we required DTOs to be written for everything! A DTO is just one means to an end. If that end can be achieved in a different way (and it usually can), why insist on DTOs?
Just don't use Spring Data REST where it doesn't fit your requirements.
Continuing on the customization effort, it's worth noting that Spring Data REST exists to cover exactly the parts of the API that just follow the basic REST API implementation patterns it implements. That functionality is in place to give you more time to think about:
How to shape your domain model
Which parts of your API are better expressed through hypermedia driven interactions.
Here's a slide from the talk I gave at SpringOne Platform 2016 that summarizes the situation.
The complete slide deck can be found here. There's also a recording of the talk available on InfoQ.
Spring Data REST exists for you to be able to focus on the underlined circles. By no means do we think you can build a really great API solely by switching Spring Data REST on. We just want to reduce the amount of boilerplate so you have more time to think about the interesting bits.
Just like Spring Data in general reduces the amount of boilerplate code to be written for standard persistence operations. Nobody would argue you can actually build a real-world app from only CRUD operations. But by taking the effort out of the boring bits, we allow you to think more intensively about the real domain challenges (and you should actually do that :)).
You can be very selective in overriding certain resources to completely take control of their behavior, including manually mapping the domain types to DTOs if you want. You can also place custom functionality next to what Spring Data REST provides and just hook the two together. Be selective about what you use.
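A hedged sketch of such a selective override (hypothetical Order/payment example, using Spring Data REST's @RepositoryRestController so the remaining Order resources stay exported automatically):

import org.springframework.data.rest.webmvc.RepositoryRestController;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;

// Hypothetical controller: takes over one endpoint while Spring Data REST keeps handling the rest.
@RepositoryRestController
public class OrderPaymentController {

    private final PaymentService paymentService; // hypothetical application/domain service

    public OrderPaymentController(PaymentService paymentService) {
        this.paymentService = paymentService;
    }

    @PostMapping("/orders/{id}/payment")
    public ResponseEntity<?> submitPayment(@PathVariable("id") Long orderId) {
        // Custom behaviour (manual DTO mapping, domain calls, etc.) lives only here.
        paymentService.pay(orderId);
        return ResponseEntity.ok().build();
    }
}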
A sample
You can find a slightly advanced example of what I described in Spring RESTBucks, a Spring (Data REST) based implementation of the RESTBucks example in the RESTful Web Services book. It uses Spring Data REST to manage Order instances but tweaks its handling to introduce custom requirements and completely implement the payment part of the story manually.
Spring Data REST enables a very fast way to prototype and create a REST API based on a database structure. We're talking about minutes versus days when compared with other programming technologies.
The price you pay for that, is that your REST API is tightly coupled to your database structure. Sometimes, that's a big problem. Sometimes it's not. It depends basically on the quality of your database design and your ability to change it to suit the API user needs.
In short, I consider Spring Data REST as a tool that can save you a lot of time under certain special circumstances. Not as a silver bullet that can be applied to any problem.
We used to use DTOs with the fully traditional layering (Database, DTO, Repository, Service, Controller, ...) for every entity in our projects, hoping the DTOs would some day save our lives :)
So for a simple City entity which has id, name, country, and state, we did as below:
City table with id, name, country, ... columns
CityDTO with id, name, country, ... properties (exactly the same as the database)
CityRepository with a findCity(id),....
CityService with findCity(id) { CityRepository.findCity(id) }
CityController with findCity(id) { ConvertToJson( CityService.findCity(id)) }
Too much boilerplate code just to expose city information to the client. As this is a simple entity, no business logic happens in these layers at all; the object just passes through.
A change in the City entity started at the database and rippled through all the layers (for example adding a location property, because in the end the location property should be exposed to the user as JSON). Adding a findByNameAndCountryAllIgnoringCase method required changing every layer (each layer needs to have the new method).
With Spring Data REST (of course together with Spring Data) this is beyond simple:
import org.springframework.data.repository.CrudRepository;

public interface CityRepository extends CrudRepository<City, Long> {
    City findByNameAndCountryAllIgnoringCase(String name, String country);
}
The City entity is exposed to the client with minimal code, and you still have control over how the city is exposed. Validation, security, object mapping ... it's all there, so you can tweak everything.
For example, if I want to keep the client unaware of a City entity property name change (layer separation), I can use the custom object mapping described at https://docs.spring.io/spring-data/rest/docs/3.0.2.RELEASE/reference/html/#customizing-sdr.custom-jackson-deserialization
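For instance, a hedged sketch of keeping an old JSON field name after an internal rename, using a Jackson mixin (hypothetical: the City property name was renamed to cityName internally):

import com.fasterxml.jackson.annotation.JsonProperty;

// Hypothetical mixin: clients keep seeing the old "name" field in the JSON
// representation even though the entity property is now called cityName.
public abstract class CityJsonMixin {

    @JsonProperty("name")
    abstract String getCityName();
}

The mixin could then be registered on the ObjectMapper Spring Data REST uses, e.g. via objectMapper.addMixIn(City.class, CityJsonMixin.class) inside the Jackson customization callback of a RepositoryRestConfigurer (see the linked documentation for the exact hook).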
To summarize
We use Spring Data REST as much as possible; for complicated use cases we can still go for traditional layering and let the Service and Controller do some business logic.
A client/server release is going to publish at least two artifacts. This already decouples client from server. When the server's API is changed, applications do not immediately change. Even if the applications are consuming the JSON directly, they continue to consume the legacy API.
So, the decoupling is already there. The important thing is to think about the various ways a server's API is likely to evolve after it is released.
I primarily work with projects which use DTOs and numerous rigid layers of boilerplate between the server's SQL and the consuming application. Rigid coupling is just as likely in these applications. Often, changing anything in the DB schema requires us to implement a new set of endpoints. Then, we support both sets of endpoints along with the accompanying boilerplate in each layer (Client, DTO, POJO, DTO <-> POJO conversions, Controller, Service, Repository, DAO, JDBC <-> POJO conversion, and SQL).
I'll admit that there is a cost to dynamic code (like spring-data-rest) when doing anything not supported by the framework. For example, our servers need to support a lot of batch insert/update operations. If we only need that custom behavior in a single case, it's certainly easier to implement it without spring-data-rest. In fact, it may be too easy. Those single cases tend to multiply. As the number of DTOs and accompanying code grows, the inconsistencies eventually become extremely burdensome to maintain. In some non-dynamic server implementations, we have hundreds of DTOs and POJOs that are likely no longer used by anything. But, we are forced to continue supporting them as their number grows each month.
With spring-data-rest, we pay the cost of customization early. With our multi-layer hard-coded implementations, we pay it later. Which one is preferred depends on a lot of factors (including the team's knowledge and the expected lifetime of the project). Both types of project can collapse under their own weight. But, over time, I've become more comfortable with implementations (like spring-data-rest without DTOs) that are more dynamic. This is especially true when the project lacks good specifications. Over time, such a project can easily drown in the inconsistencies buried within its sea of boilerplate.
From the Spring documentation, I don't see that Spring Data REST exposes entities; you are the one doing it.
Spring Data projects intend to ease the process of accessing different data sources, but you are the one deciding which layer to expose via Spring Data REST.
Reorganizing your project will help to solve your issue.
Every @Repository that you create with Spring Data is really more a DAO in the design sense than a Repository. Each one is tightly coupled to a particular data source you want to reach out to, say JPA, Mongo, Redis, Cassandra, ...
Those layers are meant to return entity representations or projections.
However, if you look at the Repository pattern from a design perspective, you should have a higher layer of abstraction above those specific DAOs, where your app uses the DAOs to get info from as many different sources as it needs and builds business-specific objects for your app (those might look more like your DTOs).
That is probably the layer you want to expose on your Spring Data Rest.
NOTE: I see an answer recommending returning Entity instances just because they have the same properties as the DTO. This is normally bad practice, and in particular it is a bad idea in Spring and many other frameworks, because they do not return your actual classes; they return proxy wrappers so they can work some magic like lazy loading of values and the like.

Is it possible to inject too many repositories into a controller?

I have the first large solution that I am working on using MVC3. I am using ViewModels, AutoMapper, and DI.
To create my ViewModels for some of the more complex edit/creates I am injecting 10 or so
repositories. For all but one of the repositories they are only there to get the data to populate a select list on the ViewModel as I am simply getting associated FK entities etc.
I've seen it mentioned that injecting large numbers of repositories is bad practice and I should refactor. How many is too many? Is this too many? How should I refactor? Should I create a dedicated service that returns select lists etc.?
Just to give an example, here is the constructor for my RequirementsAndOfferController:
public RequirementsAndOfferController(
IdefaultnoteRepository defaultnoteRepository,
IcontractformsRepository contractformsRepository,
IperiodRepository periodRepository,
IworkscopeRepository workscopeRepository,
IcontactRepository contactRepository,
IlocationRepository locationRepository,
IrequirementRepository requirementRepository,
IContractorRepository contractorRepository,
IcompanyRepository companyRepository,
IcontractRepository contractRepository,
IrequirementcontracttypeRepository requirementcontracttypeRepository,
IoffercontractRepository offercontractRepository)
All of the above populate the selects apart from the requirementRepository and offercontractRepository which I use to get the requirements and offers.
Update
General thoughts and updates: I was encouraged to consider this issue by Mark Seemann's blog article on over-injection. I was specifically interested in the repositories and why I was having to inject this number. Having considered my design, I think I am clearly not using one repository per aggregate root (as per DDD).
I have for example cars, and cars have hire contracts, and hire contracts have hire periods.
I was creating a repository for cars, hire contracts, and hire periods. That meant 3 repositories when I think there should only be one: hire contracts and periods can't exist without cars. Therefore I have reduced some repositories that way.
I am still left with some complex forms (the customer is demanding these large forms) that require a number of repositories in the controller. Maybe this is because I haven't refactored enough. As far as I can see, though, I am going to need separate repositories to get the select lists.
I'm considering options for creating some sort of service that will provide all the select lists I need. Is that good practice/bad practice? Should my services only be oriented around aggregate roots? If so, having one service providing selects would be wrong. However, the selects do seem to be the same type of thing, and grouping them together is attractive in some ways.
It would seem my question is similar to how-to-deal-with-constructor-over-injection-in-net
I guess I am now more looking for specific advice on whether a Select List service is good or bad.
Any advice appreciated.
You have the right idea starting with a repository pattern. Depending on how you use your repositories, I completely understand how you might end up with a lot (maybe even 1 per database table.)
You'll have to be the judge of your own specifications and business requirements, but perhaps you can consider a business layer or service layer.
Business Layer
This layer might be composed of business objects that encapsulate one or more intents of your view models (and inherently views). You didn't describe any of your domain, but maybe a business object for Users could include some CRUD methods. Then, your view models would rely on these business objects instead of directly calling the repository methods. You may have already guessed that the refactoring would move calls to repository methods into the business objects.
Service Layer
A service layer might even use some of the business objects described above, but perhaps you can design some type of messaging protocol/system/standard to communicate between your web app and maybe a WCF service running on the server to control state of some kind.
This isn't the most descriptive of examples, but I hope it helps with a very high level view of refactoring options.

Architecture : layer responsibility and communication with modularity

I'm currently trying to design an architecture for my new webapp project that has this kind of concept:
It consists of several big modules that are independent from one another, but can still communicate with and affect one another.
For example, I could enable the purchasing module along with the production module in my webapp, and let's assume the modules can communicate with one another.
But then I could activate only the purchasing module and disable the production module in the webapp just by configuring it, without changing any of the code, and the purchasing module would still work fine (independent of the production module).
Here's what I've been thinking about for the architectural layers to support this kind of application:
The UI Layer
JSF 2.0 + Primefaces widgets
Requestscoped ManagedBean + Flash object to transfer data between pages
The ManagedBean will deal with the UI states, UI validations, but not with the business logic operations
The ManagedBean also has access to the service layer, injected by Spring
The ManagedBean could have simple fields (like string, integer, etc.), or view models (to encapsulate some related fields), or even the Entity models, which start out as transient objects and become detached objects once they have been persisted within a transaction and the transaction has completed.
These field combinations could be used depending on the situation, and validations like @Required, for example, would be placed in the ManagedBean's setter methods. The Entity model could have @NotNull or @Size on its fields.
In my thinking, the entities are only JPA POJOs with the JPA annotations defining the relationships between them, without any behaviour except the validations defined by those annotations.
The Service Layer
This layer will handle the business logic validations and operations
Modularity: could also call the service layer of other modules, where the other modules could be non-existent if disabled via configuration. Perhaps this can be achieved via another layer for communication between modules, or perhaps I could use Spring to inject empty implementations for the disabled modules (see the sketch after this list)?
Input : It can accept Entity models, or plain variables, or view models
Output : The return value could vary from void, Entity, a list of Entities (to be displayed later in a datatable in JSF), and could be plain variables like boolean, string, integer, etc.
In the future, this layer will also provide web services for mobile devices or other languages that support web services (I still don't know how, but I think this is possible, even if the methods accept objects or entities as parameters)
Each service object will have a DAO instance injected by Spring and will call the DAO for data operations like CRUD operations, querying, etc.
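Regarding the idea above of injecting empty implementations for disabled modules, a hedged sketch with hypothetical names (the real vs. no-op bean could be selected via a Spring profile or a configuration property):

// Interface the purchasing module uses to talk to the production module.
public interface ProductionService {
    void notifyPurchaseCompleted(Long purchaseId);
}

// Real implementation, wired when the production module is enabled.
class DefaultProductionService implements ProductionService {
    @Override
    public void notifyPurchaseCompleted(Long purchaseId) {
        // ... actual production-side processing ...
    }
}

// Null-object implementation, wired when the production module is disabled;
// the purchasing module keeps calling the same interface and works unchanged.
class NoOpProductionService implements ProductionService {
    @Override
    public void notifyPurchaseCompleted(Long purchaseId) {
        // Intentionally does nothing.
    }
}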
The DAO Layer
Will have the data operations like CRUD operations, querying (JPQL, named queries, criteria queries, native SQL queries, stored procedure calls), etc.
Input : It can accept Entity models, or plain variables, or view models
Output : The return value could vary from void, Entity, a list of Entities (to be displayed later in a datatable in JSF), and could be plain variables like boolean, string, integer, etc.
Having one DAO for each entity is the norm, but when dealing with multiple tables in a single data operation, I'd have to introduce new DAOs.
Will have the EntityManager injected by Spring
These are the things I have in mind. With this, I tried doing some googling around these topics and found many other things like:
Domain-Driven Design (DDD), where the entities could have persistence logic in them? I think this is also the Active Record pattern? Spring Roo seems to generate this kind of model as well. It seems to be the opposite of an Anemic Domain Model.
The Data Transfer Object (DTO), encapsulating the data communicated between layers and avoiding the lazy-initialization problems with unloaded FetchType.LAZY hierarchies when using JPA? Open Session in View seems to have its own pros and cons in solving the lazy exception as well.
And some would say you don't need the DAO anymore, as described in the Spring Roo documentation.
With all these matters in mind, please share your thoughts on my current design when it comes to these:
Speed of development, with me thinking there will be less boilerplate because I can make use of the Entities directly rather than converting to and from DTOs
Ease of maintenance, with me thinking about having a clear separation between UI state/logic, business process logic, and the data operations layer
Support for modularization, perhaps using Maven with each module as one artifact, depending on one another as needed? <-- this is where it's all very foggy for me
Web services in the future. I have never tried web services before, but I assume public methods in the service layer could be exported as web services, so they could be called from mobile devices or any other platform that supports web service calls?
Could you please share your experience in this matter ?
Find an OR mapper you like and don't devote any more attention to the data layer. That is mostly a solved problem, and most of the attention you devote to it will be reinventing the wheel. Very few people write applications whose CRUD needs are so unique that they obviate ORM use these days.
Some of the same advice for the UI - find tools and frameworks rather than spending too much time on all of that, there's a lot of good development wealth in place there.
So, concentrate on the service layer, where the unique nature of your application is really expressed. But we can't really validate or critique your service layer because we don't know anything about the problem you're trying to solve. All of the things you've listed are certainly good approaches for certain problems, certain sets of trade-offs, etc. Without knowing more about what matters (performance / development time / configurability / robustness / clarity), nobody can tell you what the right set of choices is.
On your "output" item - other devices can support communication with your app as long as everything serializes down to a common format, usually XML. Then you just send it over the wire, and rehydrate it on the other end.
Software development, when it is non-trivial, is a Wicked Problem. It is likely that much of the advice you get would need to be thrown out halfway through your project. I don't generally believe in grand architectures - focus on solving particular problems as well as you can, and if you're lucky, a pattern will emerge that you can take advantage of. Anything more is generally hubris.

Why use service layer?

I looked at the example on http://solitarygeek.com/java/developing-a-simple-java-application-with-spring/comment-page-1#comment-1639
I'm trying to figure out why the service layer is needed in the first place in the example he provides. If you took it out, then in your client, you could just do:
UserDao userDao = new UserDaoImpl();
Iterator users = userDao.getUsers();
while (…) {
…
}
It seems like the service layer is simply a wrapper around the DAO. Can someone give me a case where things could get messy if the service layer were removed? I just don’t see the point in having the service layer to begin with.
Having the service layer be a wrapper around the DAO is a common anti-pattern. In the example you give it is certainly not very useful. Using a service layer means you get several benefits:
you get to make a clear distinction between web type activity best done in the controller and generic business logic that is not web-related. You can test service-related business logic separately from controller logic.
you get to specify transaction behavior so if you have calls to multiple data access objects you can specify that they occur within the same transaction. In your example there's an initial call to a dao followed by a loop, which could presumably contain more dao calls. Keeping those calls within one transaction means that the database does less work (it doesn't have to create a new transaction for every call to a Dao) but more importantly it means the data retrieved is going to be more consistent.
you can nest services so that if one has different transactional behavior (requires its own transaction) you can enforce that.
you can use the postCommit interceptor to do notification stuff like sending emails, so that doesn't junk up the controller.
Typically I have services that encompass use cases for a single type of user; each method on the service is a single action (work to be done in a single request-response cycle) that that user would be performing, and unlike your example there is typically more than a simple data access object call going on in there.
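A hedged sketch of the transaction point (hypothetical UserService and AuditDao; UserDao as in the linked example, Spring's @Transactional assumed):

import java.util.Iterator;

import org.springframework.transaction.annotation.Transactional;

// Both DAO calls below run inside one transaction, so the data they see is consistent
// and the database does not open a separate transaction per call.
public class UserService {

    private final UserDao userDao;   // DAO from the linked example
    private final AuditDao auditDao; // hypothetical second DAO

    public UserService(UserDao userDao, AuditDao auditDao) {
        this.userDao = userDao;
        this.auditDao = auditDao;
    }

    @Transactional
    public Iterator listUsers(String requestedBy) {
        auditDao.recordAccess(requestedBy); // hypothetical call, same transaction
        return userDao.getUsers();          // as in the original example
    }
}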
Take a look at the following article:
http://www.martinfowler.com/bliki/AnemicDomainModel.html
It all depends on where you want to put your logic - in your services or your domain objects.
The service layer approach is appropriate if you have a complex architecture and require different interfaces to your DAOs and data. It's also good for providing coarse-grained methods for clients to call, which call out to multiple DAOs to get data.
However, in most cases what you want is a simple architecture so skip the service layer and look at a domain model approach. Domain Driven Design by Eric Evans and the InfoQ article here expand on this:
http://www.infoq.com/articles/ddd-in-practice
Using a service layer is a well-accepted design pattern in the Java community. Yes, you could use the DAO implementation straight away, but what if you want to apply some business rules?
Say you want to perform some checks before allowing a user to log into the system. Where would you put that logic? Also, the service layer is the place for transaction demarcation.
It's generally good to keep your DAO layer clean and lean. I suggest you read the article "Don't repeat the DAO". If you follow the principles in that article, you won't be writing an implementation for each of your DAOs.
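A hedged sketch of the generic-DAO idea behind that article (JPA-flavoured; signatures are illustrative, not the article's exact code):

import java.io.Serializable;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// One generic implementation serves every entity type, so per-entity DAOs shrink
// to thin subclasses or disappear entirely.
public class GenericDao<T, ID extends Serializable> {

    @PersistenceContext
    private EntityManager entityManager;

    private final Class<T> entityClass;

    public GenericDao(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T findById(ID id) {
        return entityManager.find(entityClass, id);
    }

    public List<T> findAll() {
        return entityManager
                .createQuery("select e from " + entityClass.getSimpleName() + " e", entityClass)
                .getResultList();
    }

    public void save(T entity) {
        entityManager.persist(entity);
    }
}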
Also, kindly note that the scope of that blog post was to help beginners in Spring. Spring is so powerful that you can bend it to suit your needs with concepts like AOP, etc.
Regards,
James

in MVC/MVP/MVPC where do you put your business logic?

In the MVC/MVP/MVPC design patterns, where do you put your business logic? No, I do not mean the ASP.NET MVC Framework (aka "Tag Soup").
Some people say you should put it in the "Controller" (in MVC/MVPC) or the "Presenter". But others think it should be part of the Model.
What do you think and why?
This is how I see it:
The controller is for application logic; logic which is specific to how your application wants to interact with the domain of knowledge it pertains to.
The model is for logic that is independent of the application. i.e. logic that is valid in all possible applications of the domain of knowledge it pertains to.
Hence nearly all business rules will be in the model.
I find a useful question to ask myself when I need to decide where to put some logic is "is this always true, or just for the part of the application I am currently coding?"
The way I have my ASP.NET MVC application structured, the workflow looks like this:
Controller -> Services -> Repositories
The Services layer above is where all the business logic takes place. If you put your business logic in your Controller layer, you lose the ability to re-use that business logic in other controllers.
I don't believe it belongs in the controller, because once it's embedded there it can't get out.
I think MVC should have another layer injected in-between: a service layer that maps to use cases. It contains business logic, knows about units of work and transactions, and deals with model and persistence objects to accomplish its tasks.
The controller has a reference to the service that it needs to fulfill its use case. It worries about unmarshalling requests into objects the service can deal with, calls the service, and marshals the response to send back to the view.
With this arrangement, the service is usable on its own even without the controller/view pair. It can be a local or remote object, packaged and deployed any way you wish, that the controller deals with.
The controller now becomes bound more closely to the view. After all, the controller you'll use for a desktop is likely to be different than the one for a web app.
I think this design is more service-oriented.
Put your business logic in the domain and keep your domain separate. I prefer Presenter -> Command (message command using NServiceBus) -> Domain (with BC, Bounded Context) -> EventStore -> Event handler -> Repository (read model). If the logic does not fit in the domain, then use a service layer.
Please read the articles by Martin Fowler, Eric Evans, Greg Young and Udi Dahan. They paint a very clear picture.
Here is an article written by Mark Nijhof: http://elegantcode.com/2009/11/11/cqrs-la-greg-young/
By all means, put it in the model!
Of course some of the user interaction will have to be in the view, and that will be related to your app and business logic, but avoid this as much as possible. Ironically, following the principle of doing as little as possible while the user is 'working' in your GUI and as much as possible during 'submit' or action requests makes for a better user experience and usability, not the other way around. At least for line-of-business apps.
You can put it in two places: the Controller and the Presentation layer. By having some of the logic in the Presentation layer, you can limit the number of requests back into the architecture that add load to the system. Yeah, you have to code things twice, but sometimes this is what you need for a responsive user experience.
I kinda like what was said here (http://www.martinhunter.co.nz/articles/MVPC.pdf)
"With MVPC, the presenter component of the MVP model is split into two
components: the presenter (view control logic) and controller (abstract purpose control logic)."
