Yes or no: Should models in MVC contain application logic? - model-view-controller

Yesterday I had some discussion with one of our developers regarding MVC, more precisely about the role of the model component in MVC.
In my opinion, a model should just contain properties and almost no functionality, so that there are as few methods in model classes as possible.
My colleague, though, believes that models can and should do more than that and offer a lot more functionality.
Here is an example we argued about.
Example 1
Let's say we wanted to create a blog. A blog has articles and tags. Each article can have multiple tags and each tag can belong to multiple articles. So we have a m:n relation here.
In pseudocode it'd probably look something like this:
class Article{
public int id;
public String title;
public String content;
public Tag[] tags;
// Constructor
public Article(id, title, content, tags){
this.id = id;
this.title = title;
this.content = content;
this.tags = tags;
}
}
class Tag{
public int id;
public String name;
// Constructor
public Tag(id, name){
this.id = id;
this.name = name;
}
}
Now, assume that we're working loosely coupled here, which means we might have an instance of Article that has no Tags yet, so we'll use an Ajax call (to our backend, which has a database containing all the information) to get the tags that belong to our article.
Here comes the tricky part. I believe that getting the backend data via Ajax+JSON should be the controller's job, using a dedicated class that handles the Ajax request and parses the result:
class MyController{
private void whatever(articleID){
Article article = (Article) ContentParser.get(articleID, ContentType.ARTICLE);
doSomethingWith(article);
}
}
public abstract class ContentParser{
public static Object get(int id, ContentType type){
String json = AjaxUtil.getContent(id, type.toString()); // Asks the backend to get the article via JSON
Article article = json2Article(json);
// Just in case
Tag[] tags = article.tags;
if (tags == null || tags.length <= 0){
json = AjaxUtil.getContent(article.id, ContentType.TAGS); // Gets all tags for this article from backend via ajax
tags = json2Tags(json);
article.tags = tags;
}
return article;
}
// Does funky magic and parses the JSON string. Then creates a new instance of Article
public static Article json2Article(String json){
/*
...
*/
return new Article(id, title, content, tags);
}
// Does funky magic and parses the JSON string. Then creates a new instance of Tag
public static Tag[] json2Tags(String json){
/*
...
*/
return tags;
}
}
Example 2
My colleague believes that this breaks with the idea of MVC; he suggests that the model should take care of this:
class Blog{
public int id;
public String title;
public Article[] articles;
// Constructor
public Blog(id, title, articles){
this.id = id;
this.title = title;
this.articles = articles;
}
public Article[] getArticles(){
if (articles == null || articles.length <= 0){
String json = AjaxUtil.getContent(id, ContentType.ARTICLE); // Gets all articles for this blog from backend via ajax
articles = json2Articles(json);
}
return articles;
}
private Article[] json2Articles(String json){
/*
...
*/
return articles;
}
}
class Article{
public int id;
public String title;
public String content;
public Tag[] tags;
// Constructor
public Article(id, title, content, tags){
this.id = id;
this.title = title;
this.content = content;
this.tags = tags;
}
public Tag[] getTags(){
if (tags == null || tags.length <= 0){
String json = AjaxUtil.getContent(id, ContentType.TAGS); // Gets all tags for this article from backend via ajax
tags = json2Tags(json);
}
return tags;
}
// Does funky magic and parses the JSON string. Then creates a new instance of Tag
private Tag[] json2Tags(String json){
/*
...
*/
return tags;
}
}
And outside of the model you'd do blog.getArticles(); or article.getTags(); to get the tags without bothering with the Ajax call.
However, as handy as this might be, I believe this approach breaks with MVC, because in the end every model will be full of methods that do various funky stuff, while the controller and helper classes do almost nothing.
In my understanding of MVC, models should only contain properties and a minimum of "helper methods". For example, a model "Article" could offer a method getNumOfTags(), but it shouldn't do any Ajax calls on its own.
So, which approach is correct?

Generally I try to keep controllers simple in terms of logic too. If business logic is required, it goes up to 'service layer' classes that handle it. This also avoids repeating code/logic, which ultimately makes the whole project more maintainable if the business logic were to change. I just keep models purely as entity objects.
I think the answer above sums it up nicely though; it is easy to over-engineer a project based on design patterns. Go with whatever works for you and is most maintainable/efficient.

You should stop treating the "model" in MVC as some class. The model is not a class or an object; it is a layer (in modern MVC, there have been some evolutions since the inception of the concept). What people tend to call "models" are actually domain objects (I blame Rails for this mass stupidity).
The application logic (the interaction between domain logic structures and the storage abstraction) should be part of the model layer. To be more precise: it should live inside services.
The interaction between the presentation layer (controllers, views, layouts, templates) and the model layer should happen only through those services.
Application logic has no place in controllers. Controllers are structures of the presentation layer, and they are responsible for handling user input. Please do not expose domain objects to them.
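To make that concrete, here is a minimal sketch of what "only through services" could look like for the Article example from the question. The names IArticleGateway and ArticleService are invented for illustration; Article and Tag are the classes from the question.
// Hypothetical service inside the model layer: it coordinates the domain
// object (Article) with the storage/transport abstraction (IArticleGateway).
public interface IArticleGateway
{
    Article LoadArticle(int articleId);
    Tag[] LoadTags(int articleId);
}

public class ArticleService
{
    private readonly IArticleGateway _gateway;

    public ArticleService(IArticleGateway gateway)
    {
        _gateway = gateway;
    }

    // Application logic lives here, not in the controller and not in Article.
    public Article GetArticleWithTags(int articleId)
    {
        Article article = _gateway.LoadArticle(articleId);
        if (article.tags == null || article.tags.Length == 0)
        {
            article.tags = _gateway.LoadTags(articleId);
        }
        return article;
    }
}

// The controller (presentation layer) talks only to the service.
public class ArticleController
{
    private readonly ArticleService _service;

    public ArticleController(ArticleService service)
    {
        _service = service;
    }

    public void Show(int articleId)
    {
        Article article = _service.GetArticleWithTags(articleId);
        // ...hand the article to the view...
    }
}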

Correct? Either. They both compile, don't they?
Handy tricks are nice, so why not use them where you can? That being said, as you've pointed out, you may get bloated models if you put all sorts of logic in them. Likewise, though, you can get bloated controllers when they do massive amounts of work in each action. There are ways to abstract elements out of either, if that's necessary.
At the end of the day, design patterns are just guidelines. You shouldn't blindly follow any rule just because someone else said it. Do what works for you, what you think gives clean, extensible code, and what hits whatever metrics you think make good code.
All that being said, for true idealistic MVC I would say that models should not have any external actions, they're data representations, nothing more. But feel free to disagree :-)

Your suggestion about the models (without any business logic inside) sounds more like you are talking about Value Objects. The suggestion of your colleague sounds more like Domain Objects.
In my opinion, which concept to use depends on the framework being used (that's the practical view; the more philosophical one is below). If a framework is used, it usually sets rules about how you should implement each component.
For example, we can look at different MVC frameworks. In Flex's Cairngorm framework we have both: VOs (value objects) are primarily used for binding to the view, while DOs (domain objects) hold the business logic. If we look at ASP.NET's MVC implementation, we have a model which contains at least the data (VO) but also some validation (if required). Now let us look at a UI MV* framework, for example Backbone.js. Backbone's documentation says:
Models are the heart of any JavaScript application, containing the
interactive data as well as a large part of the logic surrounding it:
conversions, validations, computed properties, and access control.
If we look into the traditional MVC provided by Smalltalk we see that: "Model: manages the behavior and data of the application domain" so we have some behavior in it, not just plain data.
Let's think practically: if we don't have any logic in the Model, we probably have to put all the application and business logic into the Controller.
Now let's focus on a concrete example. Imagine we have a model which is a Graph, and we want to find the shortest path between two nodes in it. A good question is where to put the algorithm that finds the shortest path. It's a kind of business logic, right? If we look at the main benefits of MVC (code reuse, DRY, etc.), we can see that if we want to reuse our model in the best possible way we should implement the shortest path inside it. The shortest path algorithm usually depends on the graph's inner representation (at least for the best performance of the algorithm), but that representation is encapsulated inside the model; we can't write one implementation in the controller that covers both a matrix representation and an adjacency list from the outside, so it's not a good idea to put it there. A small sketch of keeping it inside the model follows.
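For illustration, a rough sketch of that idea, assuming an unweighted graph stored as an adjacency list (the representation stays private to the model, so any controller can reuse ShortestPath without knowing how the graph is stored):
using System.Collections.Generic;

// Model: the inner representation (an adjacency list) stays encapsulated,
// and the algorithm that depends on it lives right next to it.
public class Graph
{
    private readonly Dictionary<int, List<int>> _adjacency = new Dictionary<int, List<int>>();

    public void AddEdge(int from, int to)
    {
        if (!_adjacency.ContainsKey(from)) _adjacency[from] = new List<int>();
        if (!_adjacency.ContainsKey(to)) _adjacency[to] = new List<int>();
        _adjacency[from].Add(to);
        _adjacency[to].Add(from);
    }

    // Breadth-first search: returns the shortest path (fewest edges)
    // between two nodes, or null if no path exists.
    public List<int> ShortestPath(int start, int goal)
    {
        if (!_adjacency.ContainsKey(start) || !_adjacency.ContainsKey(goal)) return null;

        var previous = new Dictionary<int, int>();
        var visited = new HashSet<int> { start };
        var queue = new Queue<int>();
        queue.Enqueue(start);

        while (queue.Count > 0)
        {
            int node = queue.Dequeue();
            if (node == goal)
            {
                // Walk the predecessor chain back from the goal to the start.
                var path = new List<int> { goal };
                while (path[path.Count - 1] != start)
                    path.Add(previous[path[path.Count - 1]]);
                path.Reverse();
                return path;
            }
            foreach (int neighbor in _adjacency[node])
            {
                if (visited.Add(neighbor))
                {
                    previous[neighbor] = node;
                    queue.Enqueue(neighbor);
                }
            }
        }
        return null; // goal not reachable from start
    }
}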
So in conclusion, I can say that it depends on your needs (mostly). The traditional purpose of MVC is to be used in the UI; as the GoF book puts it:
The Model/View/Controller (MVC) triad of classes [first described by Krasner and Pope in 1988] is used to build user interfaces in Smalltalk-80.
Nowadays we use it in different areas: not only the UI, but also web applications, etc. It can't be used in its pure form because of that.
But anyway, in my opinion the best separation of concerns is achieved by isolating the business logic in the Model and the application logic in the Controller.

In short I believe the Model should just be data that will be sent to your View. It helps drive the MVC paradigm into other aspects of your application.
If you are trying not to break the MVC pattern, your data should all be returned as a business model to your controller and unpacked into your ViewModel. Request the information server side and then send everything. If you need to make JSON requests, they should go either to a REST service or to a controller. Having these getTags and getArticles calls makes it very messy if your view is now deciding which one to call; I can't understand why that information isn't available upfront. Using static methods is the same approach, just from a different angle.
I have found it best to have my controller actions call an injected service which does the magic, and to use the Models within the MVC web application to return the information. This makes things neater and further emphasizes the separation of concerns. Your controller actions then become very lean, and it's clear what they are doing.
I believe that starting by treating the Model as completely dumb might go a long way toward sorting out some of the architectural problems I am seeing in this code.
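As a rough sketch of that shape (IArticleService and ArticleViewModel are invented names, not from any framework): the action stays lean and everything is composed server side before it reaches the view.
using System.Web.Mvc;

public interface IArticleService
{
    ArticleViewModel GetArticleWithTags(int articleId);
}

public class ArticleViewModel
{
    public string Title { get; set; }
    public string Content { get; set; }
    public string[] TagNames { get; set; }
}

public class ArticleController : Controller
{
    // Injected service; it fetches the article and its tags and shapes
    // them into the view model.
    private readonly IArticleService _articleService;

    public ArticleController(IArticleService articleService)
    {
        _articleService = articleService;
    }

    // The action stays lean: the view receives a fully populated model,
    // so no getTags()/getArticles() decisions are left to the view.
    public ActionResult Details(int articleId)
    {
        ArticleViewModel model = _articleService.GetArticleWithTags(articleId);
        return View(model);
    }
}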

Yes, it should. You are talking about Domain-Driven Design.
https://en.wikipedia.org/wiki/Domain-driven_design
If you feel you are not doing that, then you are doing Anaemic Domain Model design, which is an anti-pattern.
I read through an article by Martin Fowler on how bad the Anaemic Domain Model is. https://martinfowler.com/bliki/AnemicDomainModel.html

Related

Web API is it necessary to have ViewModels layer classes?

When I use Web (MVC), I always create a separate layer of classes. These classes are often the same as the DTO classes, but with attributes like [Display(Name = "Street")] and validation. But for Web API the Display attributes are not necessary, and validation can be done with FluentValidation. Should an API controller return ViewModel classes, or will DTO classes be fine too?
The answer, as always, is ... it depends.
If your API is serving multiple clients, apps, etc., then returning DTOs is the better option.
ViewModels are specific to the MVC client and should already be prepared for display, meaning the data should already be formatted in a specific way, some fields may be combined, and they should satisfy whatever requirements the display pages have. They are called ViewModels for a reason. The point is that they are rarely exactly the same as the data the API returns, which should be a bit more generic and follow a certain pattern to make sense to its users.
If your ViewModels are exactly the same and you only have one client, then it's up to you whether you want to create a set of duplicated classes just to avoid having the attributes.
Mapping from DTO to ViewModel and vice versa is not exactly complicated, but the process does introduce one more complication, one more layer.
Don't forget one thing though. API DTOs are supposed to return the data they have on any entity regardless of the requirements of any UI. Requirements can change anyway, new fields added or discarded. You're more than likely to leave the API alone when that happens and simply change your ViewModels.
Your ViewModels are specific to a UI page and should contain only the data required by that page. This means that you can end up with multiple ViewModels for the same data, it's just that the display requirements are different for each.
My vote goes towards keeping the ViewModels and DTOs separate, even if, at this point in time, they are exactly the same. Things always change, and this is one of those things you can actually be ready for.
Actually, it depends on the application's architecture how we want to return the response. In this case, yes, we can return DTO classes, but I don't think that is the best approach; we should instead create separate resource classes that map from the DTOs and are returned to the client. Just see the example below:
public class CustomerDTO
{
public int ID { get; set; }
public string Name { get; set; }
public int DepartmentId { get; set; }
}
public class CustomerResource
{
[JsonProperty("name")]
public string Name { get; set; }
[JsonProperty("department")]
public string Department { get; set; }
}
Suppose we have the CustomerDTO class above and we want to return the response in the following JSON format:
{
"name":"Abc xyz",
"department":"Testing"
}
So in this case we should have a separate class that is returned as the response to the end user, which is why I created CustomerResource. In this scenario we create a mapper that maps the DTO to the resource object.
And with this implementation we can also test the resources independently.
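A small sketch of such a mapper (how the department name is resolved from DepartmentId is an assumption here, so it is simply passed in by the caller):
public static class CustomerMapper
{
    // Maps the internal DTO to the resource shape returned to the client.
    public static CustomerResource ToResource(CustomerDTO dto, string departmentName)
    {
        return new CustomerResource
        {
            Name = dto.Name,
            Department = departmentName
        };
    }
}
A unit test can then build a CustomerDTO by hand, run it through the mapper, and assert on the resulting CustomerResource without touching the database.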

Rich vs Anemic Domain Model [closed]

I am deciding if I should use a Rich Domain Model over an Anemic Domain Model, and looking for good examples of the two.
I have been building web applications using an Anemic Domain Model, backed by a Service --> Repository --> Storage layer system, using FluentValidation for BL validation, and putting all of my BL in the Service layer.
I have read Eric Evans's DDD book, and he (along with Fowler and others) seems to think Anemic Domain Models are an anti-pattern.
So I was just really wanting to get some insight into this problem.
Also, I am really looking for some good (basic) examples of a Rich Domain Model, and the benefits over the Anemic Domain Model it provides.
The difference is that an anemic model separates logic from data. The logic is often placed in classes with names ending in Service, Util, Manager, Helper and so on. These classes implement the data interpretation logic and therefore take the data model as an argument, e.g.
public BigDecimal calculateTotal(Order order){
...
}
while the rich domain approach inverts this by placing the data interpretation logic in the rich domain model. Thus it puts logic and data together, and a rich domain model would look like this:
order.getTotal();
This has a big impact on object consistency. Since the data interpretation logic wraps the data (the data can only be accessed through object methods), the methods can react to state changes of other data; this is what we call behavior.
In an anemic model the data models cannot guarantee that they are in a legal state, while in a rich domain model they can. A rich domain model applies OO principles like encapsulation, information hiding and bringing data and logic together, and therefore an anemic model is an anti-pattern from an OO perspective.
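As a hedged illustration of that last point (Order and OrderLine are invented for the example): the rich version keeps its lines private, so the total can never drift out of sync with the data it is computed from.
using System;
using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    public decimal Price { get; private set; }
    public int Quantity { get; private set; }

    public OrderLine(decimal price, int quantity)
    {
        // The constructor refuses illegal state up front.
        if (price < 0) throw new ArgumentOutOfRangeException("price");
        if (quantity <= 0) throw new ArgumentOutOfRangeException("quantity");
        Price = price;
        Quantity = quantity;
    }
}

public class Order
{
    // The data is hidden; it can only change through methods that keep it legal.
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public void AddLine(OrderLine line)
    {
        if (line == null) throw new ArgumentNullException("line");
        _lines.Add(line);
    }

    // The data interpretation logic lives with the data: callers never
    // recompute the total themselves, so it cannot drift out of sync.
    public decimal GetTotal()
    {
        return _lines.Sum(l => l.Price * l.Quantity);
    }
}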
For a deeper insight take a look at my blog https://www.link-intersystems.com/blog/2011/10/01/anemic-vs-rich-domain-models/
Bozhidar Bozhanov seems to argue in favor of the anemic model in this blog post.
Here is the summary he presents:
domain objects should not be spring (IoC) managed, they should not have DAOs or anything related to infrastructure injected in them
domain objects have the domain objects they depend on set by hibernate (or the persistence mechanism)
domain objects perform the business logic, as the core idea of DDD is, but this does not include database queries or CRUD – only operations on the internal state of the object
there is rarely need of DTOs – the domain objects are the DTOs themselves in most cases (which saves some boilerplate code)
services perform CRUD operations, send emails, coordinate the domain objects, generate reports based on multiple domain objects, execute queries, etc.
the service (application) layer isn’t that thin, but doesn’t include business rules that are intrinsic to the domain objects
code generation should be avoided. Abstraction, design patterns and DI should be used to overcome the need of code generation, and ultimately – to get rid of code duplication.
UPDATE
I recently read this article where the author advocates following a sort of hybrid approach: domain objects can answer various questions based solely on their state (which, in the case of totally anemic models, would probably be done in the service layer).
My point of view is this:
Anemic domain model = database tables mapped to objects (only field values, no real behavior)
Rich domain model = a collection of objects that expose behavior
If you want to create a simple CRUD application, maybe an anemic model with a classic MVC framework is enough. But if you want to implement some kind of logic, anemic model means that you will not do object oriented programming.
*Note that object behavior has nothing to do with persistence. A different layer (Data Mappers, Repositories e.t.c.) is responsible for persisting domain objects.
When I used to write monolithic desktop apps I built rich domain models, used to enjoy building them.
Now I write tiny HTTP microservices, there's as little code as possible, including anemic DTOs.
I think DDD and this anemic argument date from the monolithic desktop or server app era. I remember that era and I would agree that anemic models are odd. I built a big monolithic FX trading app and there was no model, really, it was horrible.
With microservices, the small services with their rich behaviour, are arguably the composable models and aggregates within a domain. So the microservice implementations themselves may not require further DDD. The microservice application may be the domain.
An orders microservice may have very few functions, expressed as RESTful resources or via SOAP or whatever. The orders microservice code may be extremely simple.
A larger, more monolithic single (micro)service, especially one that keeps its model in RAM, may benefit from DDD.
First of all, I copy-pasted this answer from the following article:
http://msdn.microsoft.com/en-gb/magazine/dn385704.aspx
Figure 1 shows an Anemic Domain Model, which is basically a schema with getters and setters.
Figure 1 Typical Anemic Domain Model Classes Look Like Database Tables
public class Customer : Person
{
public Customer()
{
Orders = new List<Order>();
}
public ICollection<Order> Orders { get; set; }
public string SalesPersonId { get; set; }
public ShippingAddress ShippingAddress { get; set; }
}
public abstract class Person
{
public int Id { get; set; }
public string Title { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string CompanyName { get; set; }
public string EmailAddress { get; set; }
public string Phone { get; set; }
}
In this richer model, rather than simply exposing properties to be read and written to, the public surface of Customer is made up of explicit methods.
Figure 2 A Customer Type That’s a Rich Domain Model, Not Simply Properties
public class Customer : Contact
{
public Customer(string firstName, string lastName, string email)
{
FullName = new FullName(firstName, lastName);
EmailAddress = email;
Status = CustomerStatus.Silver;
}
internal Customer()
{
}
public void UseBillingAddressForShippingAddress()
{
ShippingAddress = new Address(
BillingAddress.Street1, BillingAddress.Street2,
BillingAddress.City, BillingAddress.Region,
BillingAddress.Country, BillingAddress.PostalCode);
}
public void CreateNewShippingAddress(string street1, string street2,
string city, string region, string country, string postalCode)
{
ShippingAddress = new Address(
street1, street2,
city, region,
country, postalCode);
}
public void CreateBillingInformation(string street1,string street2,
string city,string region,string country, string postalCode,
string creditcardNumber, string bankName)
{
BillingAddress = new Address (street1,street2, city,region,country,postalCode );
CreditCard = new CustomerCreditCard (bankName, creditcardNumber );
}
public void SetCustomerContactDetails
(string email, string phone, string companyName)
{
EmailAddress = email;
Phone = phone;
CompanyName = companyName;
}
public string SalesPersonId { get; private set; }
public CustomerStatus Status { get; private set; }
public Address ShippingAddress { get; private set; }
public Address BillingAddress { get; private set; }
public CustomerCreditCard CreditCard { get; private set; }
}
One of the benefits of rich domain classes is that you can call their behaviour (methods) any time you have a reference to the object, in any layer. Also, you tend to write small, distributed methods that collaborate. With anemic domain classes, you tend to write fat procedural methods (in the service layer) that are usually driven by use case. They are usually less maintainable than rich domain classes.
An example of domain classes with behaviours:
class Order {
String number
List<OrderItem> items
ItemList bonus
Delivery delivery
void addItem(Item item) { /* add the item; add bonus if necessary */ }
ItemList needToDeliver() { /* return items + bonus */ }
void deliver() {
delivery = new Delivery()
delivery.items = needToDeliver()
}
}
The method needToDeliver() will return the list of items that need to be delivered, including the bonus. It can be called inside the class, from another related class, or from another layer. For example, if you pass an Order to the view, you can use needToDeliver() on the selected Order to display the list of items to be confirmed by the user before they click the save button to persist the Order.
Responding To Comment
This is how I use the domain class from controller:
def save = {
Order order = new Order()
order.addItem(new Item())
order.addItem(new Item())
repository.create(order)
}
The creation of the Order and its LineItems happens in one transaction. If one of the LineItems can't be created, no Order will be created.
I tend to have methods that represent a single transaction, such as:
def deliver = {
Order order = repository.findOrderByNumber('ORDER-1')
order.deliver()
// save order if necessary
}
Anything inside deliver() will be executed as one single transaction. If I need to execute many unrelated methods in a single transaction, I would create a service class.
To avoid lazy-loading exceptions, I use JPA 2.1 named entity graphs. For example, in the controller for the delivery screen, I can create a method that loads the delivery attribute and ignores the bonus, such as repository.findOrderByNumberFetchDelivery(). In the bonus screen, I call another method that loads the bonus attribute and ignores the delivery, such as repository.findOrderByNumberFetchBonus(). This requires discipline, since I still can't call deliver() inside the bonus screen.
I think the root of the problem is a false dichotomy. How is it possible to extract these two models, rich and "anemic", and contrast them with each other? I think it's possible only if you have wrong ideas about what a class is. (I am not sure, but I think I picked this up from one of Bozhidar Bozhanov's videos on YouTube.) A class is not data plus methods over that data. That is a completely invalid understanding, and it leads to dividing classes into two categories: data only, hence the anemic model, and data plus methods, hence the rich model (to be more precise, there is even a third category: methods only).
The truth is that a class is a concept in some ontological model: a word, a definition, a term, an idea, a denotation. And this understanding eliminates the false dichotomy: you cannot have ONLY an anemic model or ONLY a rich model, because that would mean your model is not adequate, not relevant to reality. Some concepts have data only, some have methods only, some are mixed, because we are trying to describe categories, sets of objects, relations and concepts with classes, and as we know, some concepts are processes only (methods), some are sets of attributes only (data), and some are relations with attributes (mixed).
I think an adequate application should include all kinds of classes and avoid fanatically limiting itself to just one model. No matter how the logic is represented, with code or with interpretable data objects (like free monads), we should have classes (concepts, denotations) representing processes, logic, relations, attributes, features, data, etc., and not try to avoid some of them or reduce all of them to one kind only.
So, we can extract logic into another class and leave the data in the original one, but it makes no sense, because a concept can include attributes and relations/processes/methods, and separating them duplicates the concept under two names, which boils down to the patterns "OBJECT-Attributes" and "OBJECT-Logic". That's fine in procedural and functional languages because of their limitations, but it's excessive self-restraint for a language that allows you to describe all kinds of concepts.
Anemic domain models are important for ORMs and easy transfer over networks (the life-blood of all commercial applications), but OO is very important for encapsulation and simplifying the 'transactional/handling' parts of your code.
Therefore what is important is being able to identify and convert from one world to the other.
Name anemic models something like AnemicUser or UserDAO etc., so developers know there is a better class to use, and then give the non-anemic class an appropriate constructor:
User(AnemicUser au)
and an adapter method to create the anemic class for transporting/persistence:
User::ToAnemicUser()
Aim to use the non-anemic User everywhere outside of transport/persistence.
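A rough sketch of that split (the field names are placeholders): the anemic class is only a bag of properties for the ORM or the wire format, and the rich class wraps it at the boundary.
using System;

// Anemic shape: just a bag of properties for the ORM / wire format. The name
// signals that there is a richer class to use everywhere else.
public class AnemicUser
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// Rich shape: used everywhere outside of transport/persistence.
public class User
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public string Email { get; private set; }

    // Build the rich object from the anemic one at the boundary.
    public User(AnemicUser au)
    {
        if (string.IsNullOrWhiteSpace(au.Name))
            throw new ArgumentException("Name is required", "au");
        Id = au.Id;
        Name = au.Name;
        Email = au.Email;
    }

    // Adapter back to the anemic shape for transporting/persistence.
    public AnemicUser ToAnemicUser()
    {
        return new AnemicUser { Id = Id, Name = Name, Email = Email };
    }
}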
The classical approach to DDD doesn't say to avoid anemic models in favor of rich models at all costs. However, MDA can still apply all the DDD concepts (bounded contexts, context maps, value objects, etc.) while using anemic rather than rich models throughout. There are many cases where using Domain Services to orchestrate complex domain use cases across a set of domain aggregates is a much better approach than just having aggregates invoked from the application layer. The only difference from the classical DDD approach is where all the validations and business rules reside. There is a construct known as model validators: validators ensure the integrity of the full input model before any use case or domain workflow takes place. The aggregate root and child entities are anemic, but each can have its own model validators, invoked as necessary by its root validator. Validators still adhere to the SRP, are easy to maintain and are unit testable.
The reason for this shift is that we're now moving more towards an API-first rather than a UX-first approach to microservices. REST has played a very important part in this. The traditional API approach (because of SOAP) was initially fixated on command-based APIs rather than HTTP verbs (POST, PUT, PATCH, GET and DELETE). A command-based API fits well with the rich-model object-oriented approach and is still very much valid. However, simple CRUD-based APIs, although they can fit within a rich model, are much better suited to simple anemic models, validators, and Domain Services to orchestrate the rest.
I love DDD for all that it has to offer, but there comes a time when you need to stretch it a bit to fit constantly changing and improving approaches to architecture.
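To make the "root validator invoking child validators" idea concrete, here is a minimal hand-rolled sketch; the interfaces and names are assumptions, not taken from any particular validation library.
using System.Collections.Generic;

// Anemic aggregate: plain data, no behaviour.
public class OrderModel
{
    public string CustomerEmail { get; set; }
    public List<OrderLineModel> Lines { get; set; } = new List<OrderLineModel>();
}

public class OrderLineModel
{
    public string Sku { get; set; }
    public int Quantity { get; set; }
}

public interface IModelValidator<T>
{
    IEnumerable<string> Validate(T model);
}

public class OrderLineValidator : IModelValidator<OrderLineModel>
{
    public IEnumerable<string> Validate(OrderLineModel line)
    {
        if (string.IsNullOrWhiteSpace(line.Sku)) yield return "Sku is required.";
        if (line.Quantity <= 0) yield return "Quantity must be positive.";
    }
}

// The root validator checks the aggregate root itself and then delegates
// to the child validator for every child entity.
public class OrderValidator : IModelValidator<OrderModel>
{
    private readonly OrderLineValidator _lineValidator = new OrderLineValidator();

    public IEnumerable<string> Validate(OrderModel order)
    {
        if (string.IsNullOrWhiteSpace(order.CustomerEmail))
            yield return "Customer email is required.";

        foreach (var line in order.Lines)
            foreach (var error in _lineValidator.Validate(line))
                yield return error;
    }
}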
Here is an example that might help:
Anemic
class Box
{
public int Height { get; set; }
public int Width { get; set; }
}
Non-anemic
class Box
{
public int Height { get; private set; }
public int Width { get; private set; }
public Box(int height, int width)
{
if (height <= 0) {
throw new ArgumentOutOfRangeException(nameof(height));
}
if (width <= 0) {
throw new ArgumentOutOfRangeException(nameof(width));
}
Height = height;
Width = width;
}
public int area()
{
return Height * Width;
}
}

the best way to maintain html snippets in ASP.NET MVC to return in ajax calls

I'm looking for a best-practices type answer here. Basically I have a very chatty application which will be returning bits of data to the client very often. The bits of data returned will eventually end up as HTML added dynamically to the DOM. So I'm trying to choose between the following two ways:
return just JSON data, and create the HTML on the client side using jQuery and possibly jQuery templates
return the actual HTML, built on the server side
I would like to make the choice that is most easily maintained; that is, I want the approach that best allows me to update the HTML snippets very often.
I'm actually looking for a way to do #2 using ASP.NET MVC partial views, with the ability to use string formatting. Essentially I'm looking to make a call like this:
string sHtml = string.Format(GetNewTradeHtml(), "GOOG", "100", "635.50");
and I want GetNewTradeHtml() to actually get the HTML from an ASP.NET MVC view instead of a string constant that might look like:
const string cNewTradeHtml = "<li><span>Symbol: {0}</span><span>Qty: {1}</span><span>Price: {2}</span></li>";
String constants seem to be a popular way to do these kinds of things, and I hate maintaining them...
Basically I think I'm looking for a way to manage several view templates that I can call ToString() on, get the raw HTML, and use string formatting on it. And I'm hoping there is a suggested way to solve my particular problem natively in ASP.NET MVC (without some hack). But perhaps (unfortunately) string constants + string.Format is the best way to maintain server-side dynamic HTML...
UPDATE:
Here's what I've learned since I posted this question:
There are LOTS of posts here on SO about rendering a view into a string: a lot of different ways, some work with different versions of MVC and some don't, some are cleaner than others, some are pretty heavy... ALL of which are normally some type of solution that requires a controller context. So in most cases the solutions work great as responses to requests. But in my case I need to do it outside of the context of a controller, so now I need to either mock the controller or make a bunch of fake objects, neither of which I really want to deal with.
So I've determined that there is actually NO easy way to render a Razor partial into its string representation without using a controller in a response. They really need to provide an easy way to do this without mocking up controller context and request objects.
What are Views in ASP.NET MVC? They are just HTML templates, nothing more. They take a model and replace template placeholders with model values. And indeed, there's no more natural way to render HTML in ASP.NET MVC than using Views.
First, declare your view model
public class NewTradeViewModel
{
public string Symbol { get; set; }
public decimal Quantity { get; set; }
public decimal Price { get; set; }
}
then your controller action:
public ViewResult GetNewTrade()
{
NewTradeViewModel model = new NewTradeViewModel();
model.Symbol = "GOOG";
model.Quantity = 100;
model.Price = 635.50m;
// PartialView, as you want just html snippets, not full layouts with master pages, etc
return PartialView("TemplateViewName", model);
}
and a very ordinary view; you may have any number of these, just change the controller action to return a specific one:
@model NewTradeViewModel
<li><span>Symbol: @Model.Symbol</span><span>Qty: @Model.Quantity</span><span>Price: @Model.Price</span></li>
Since you mentioned that your app is "chatty", you should probably consider returning JSON and rendering on the client side with a template engine.
This is really a toss-up though, because it looks like your snippets are pretty small.
If you do go with sending JSON back and forth, I can recommend jQuery templates or Mustache.
Backbone.js can also help you better organize your client-side components. It is pretty easy to get up and running with it. By default it works with jQuery templates, but you can also plug in other templates if you like.
Here is a simple approach to storing templates in separate files: http://encosia.com/using-external-templates-with-jquery-templates/
ijjo,
Just looked at your question again and noticed that you are referring to returning the HTML partial view as a string. There are loads of references here on SO to this type of function, but below is my version, taken from an 'old' MVC app that's still in production. Without further ado, it's an extension method that hooks onto the controller:
public static class ExtensionMethods
{
public static string RenderPartialToString(this ControllerBase controller, string partialName, object model)
{
var vd = new ViewDataDictionary(controller.ViewData);
var vp = new ViewPage
{
ViewData = vd,
ViewContext = new ViewContext(),
Url = new UrlHelper(controller.ControllerContext.RequestContext)
};
ViewEngineResult result = ViewEngines
.Engines
.FindPartialView(controller.ControllerContext, partialName);
if (result.View == null)
{
throw new InvalidOperationException(
string.Format("The partial view '{0}' could not be found", partialName));
}
var partialPath = ((WebFormView)result.View).ViewPath;
vp.ViewData.Model = model;
Control control = vp.LoadControl(partialPath);
vp.Controls.Add(control);
var sb = new StringBuilder();
using (var sw = new StringWriter(sb))
{
using (var tw = new HtmlTextWriter(sw))
{
vp.RenderControl(tw);
}
}
return sb.ToString();
}
}
Usage (as per archil's example above):
public string GetNewTrade()
{
NewTradeViewModel model = new NewTradeViewModel();
model.Symbol = "GOOG";
model.Quantity = 100;
model.Price = 635.50m;
// RenderPartialToString, as you want just the html snippet as a string rather than a full ActionResult
return this.RenderPartialToString("TemplateViewName", model);
}
good luck and enjoy...

Model binding in controller when form is posted - why to use view model instead of class from domain model?

I'm still reasonably new to ASP.NET MVC 3. I have come across view models and their use for passing data from a controller to the view. In my recent question on model binding two experts suggested that I should use view models for model binding as well.
This is something I haven't come across before, but both guys have assured me that it is best practice. Could someone shed some light on the reasons why view models are more suitable for model binding?
Here is an example situation: I have a simple class in my domain model.
public class TestParent
{
public int TestParentID { get; set; }
public string Name { get; set; }
public string Comment { get; set; }
}
And this is my controller:
public class TestController : Controller
{
private EFDbTestParentRepository testParentRepository = new EFDbTestParentRepository();
private EFDbTestChildRepository testChildRepository = new EFDbTestChildRepository();
public ActionResult ListParents()
{
return View(testParentRepository.TestParents);
}
public ViewResult EditParent(int testParentID)
{
return View(testParentRepository.TestParents.First(tp => tp.TestParentID == testParentID));
}
[HttpPost]
public ActionResult EditParent(TestParent testParent)
{
if (ModelState.IsValid)
{
testParentRepository.SaveTestParent(testParent);
TempData["message"] = string.Format("Changes to test parents have been saved: {0} (ID = {1})",
testParent.Name,
testParent.TestParentID);
return RedirectToAction("ListParents");
}
// something wrong with the data values
return View(testParent);
}
}
So in the third action method which gets invoked when an HTTP POST arrives I used TestParent for model binding. This felt quite convenient because the browser page that generates the HTTP POST request contains input fields for all properties of TestParent. And I actually thought that's the way the templates that Visual Studio provides for CRUD operations work as well.
However the recommendation that I got was that the signature of the third action method should read public ActionResult EditParent(TestParentViewModel viewModel).
It sounds appealing at first, but as your models and view actions get increasingly complex, you start to see the value of using ViewModels for (most) everything, especially input scenarios.
Case 1 - Most web frameworks are susceptible to over-posting. If you are binding straight to your domain model, it is very possible to over-post data and maliciously change something not belonging to the user. I find it cleaner to bind to an input view model than to maintain long whitelists or blacklists of property names, although there are some other interesting ways involving binding to an interface.
Case 2 - As your input grows in complexity, you'll run into times when you need to submit and validate fields not directly in the domain model ('I Agree' checkboxes, etc)
Case 3 - More of a personal thing, but I find model binding to relational domain objects to be a giant pain at times. Easier to link them up in AutoMapper than deal with MVC's modelbinder for complicated object graphs. MVC's html helpers also work more smoothly against primitive types than deep relational models.
The downside of using ViewModels is that it isn't very DRY.
So the moral of the story is, binding to domain models can be a viable solution for simple things, but as the complexity increases, it becomes easier to have a separate view model and then map between the two.
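In the scenario from the question, that could look roughly like the sketch below. TestParentViewModel, the AgreeToTerms field and the manual mapping are illustrative only; AutoMapper or similar could replace the mapping.
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;

// View model: only the fields the edit form actually posts. Binding to this
// instead of TestParent prevents over-posting anything else on the entity.
public class TestParentViewModel
{
    public int TestParentID { get; set; }

    [Required, StringLength(100)]
    public string Name { get; set; }

    public string Comment { get; set; }

    public bool AgreeToTerms { get; set; } // view-only field, not on the domain model
}

public class TestParentEditController : Controller
{
    private readonly EFDbTestParentRepository testParentRepository = new EFDbTestParentRepository();

    [HttpPost]
    public ActionResult EditParent(TestParentViewModel viewModel)
    {
        if (!ModelState.IsValid)
            return View(viewModel);

        // Map the bound view model onto the domain object explicitly.
        var testParent = new TestParent
        {
            TestParentID = viewModel.TestParentID,
            Name = viewModel.Name,
            Comment = viewModel.Comment
        };
        testParentRepository.SaveTestParent(testParent);
        return RedirectToAction("ListParents");
    }
}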

Where to put restrictions on entities when separating Business layer from Data Layer

I am attempting to create the business and data layers for my big ASP.NET MVC application. As this is the first time I am attempting a project of this scale, I am reading some books and trying to take good care to separate things out properly. Usually my applications mix the business logic and data access layers, and multiple business entities are intertwined in a single class (which has confused me a few times when I was trying to figure out where to add things).
Most of what I have been reading says to separate out the business and data layers. This seems all fine and dandy, but I am having trouble visualizing exactly how to do this in some scenarios. For example, let's say I am creating a system that allows admins to add a new product to the system:
public class Product
{
public int Id { get; private set; }
public string Name { get; set; }
public decimal Price { get; set; }
}
Then I separate out the data access by creating a repository
public class ProductRepository
{
public bool Add(Product product);
}
Let's say I want to require a product's name to have at least 4 characters. I can't see how to do this cleanly.
One idea I had was to expand the Name property's setter and only set the value if it's at least 4 characters long. However, there is no way for a method that is creating the product to know the name didn't get set, other than checking that Product.Name != whatever they passed in.
Another idea I had is to put it in the Add() method in the repository, but then I have my business logic right there with the data logic, which also means if the Add call fails I don't know if it failed for the business logic or because the DAL failed (and it also means I can't test it using mock frameworks).
The only thing I can think of is to put my DAL stuff in a 3rd layer that gets called from the Add() method in the repository, but I don't see this in any of the domain modelling examples in my book or on the web (that I've seen at least). It also adds to the complexity of the domain models when I am not sure it is needed.
Another example is wanting to make sure that a Name is only used by one product. Would this go in the Product class, ProductRepository Add() method, or where?
As a side note, I plan to use NHibernate as my ORM however, to accomplish what I want it (theoretically) shouldn't matter what ORM I am using since TDD should be able to isolate it all.
Thanks in advance!
I usually approach this by using a layered architecture. How to do this? You basically have the following (ideally) VS projects:
Presentation layer (where the UI stuff resides)
Business layer (where the actual business logic resides)
Data access layer (where you communicate with your underlying DBMS)
For decoupling all of them I use so-called interface layers, such that in the end I have:
Presentation layer (where the UI stuff resides)
IBusiness layer (containing the interfaces for the business layer)
Business layer (where the actual business logic resides)
IDataAccess layer (containing the interfaces for the DAO layer)
Data access layer (where you communicate with your underlying DBMS)
This is extremely handy and creates a nicely decoupled architecture. Basically your presentation layer just accesses the interfaces, not the implementations themselves. For creating the corresponding instances you should use a factory or, preferably, some dependency injection library (Unity is good for .NET apps, or alternatively Spring.NET).
How does this impact on your business logic / testability of your app?
It is probably too long to write everything in detail, but if you're concerned about having a well testable design you should absolutely consider dependency injection libraries.
Using NHibernate,...whatever ORM
Having the DAO layer completely separated from the other layers through the interfaces, you can use whatever technology you like behind it for accessing your underlying DB. You could directly issue SQL queries or use NHibernate, as you wish. The nice thing is that it is totally independent from the rest of your app. You could even start today by writing SQL manually and tomorrow exchange your DAO DLL with one that uses NHibernate, without a single change in your BL or presentation layer.
Moreover, testing your BL logic is simple. You may have a class like:
public class ProductsBl : IProductsBL
{
//this gets injected by some framework
public IProductsDao ProductsDao { get; set; }
public void SaveProduct(Product product)
{
//do validation against the product object and react appropriately
...
//persist it down if valid
ProductsDao.PersistProduct(product);
}
...
}
Now you can easily test the validation logic in your SaveProduct(...) method by mocking out the ProductDao in your test case.
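A rough sketch of such a test, using a hand-rolled fake instead of a mocking framework. It reuses ProductsBl/IProductsDao from the snippet above, assumes IProductsDao exposes just the PersistProduct method shown there, and assumes SaveProduct simply skips persistence for an invalid product; whether it throws instead is up to the implementation.
using System;

// Fake DAO that only records what was persisted.
public class FakeProductsDao : IProductsDao
{
    public Product LastPersisted { get; private set; }

    public void PersistProduct(Product product)
    {
        LastPersisted = product;
    }
}

public class ProductsBlTests
{
    // An invalid product (name shorter than 4 characters) must never reach the DAO.
    public void SaveProduct_DoesNotPersist_WhenNameTooShort()
    {
        var fakeDao = new FakeProductsDao();
        var bl = new ProductsBl { ProductsDao = fakeDao };

        bl.SaveProduct(new Product { Name = "abc" });

        if (fakeDao.LastPersisted != null)
            throw new Exception("An invalid product should not have been persisted.");
    }
}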
Put things like the product name restriction in the domain object, Product, unless you want to allow products with fewer than 4 characters in some scenarios (in this case, you'd apply the 4-character rule at the level of the controller and/or client-side). Remember, your domain objects may be reused by other controllers, actions, internal methods, or even other applications if you share the library. Your validation should be appropriate to the abstraction you are modeling, regardless of application or use case.
Since you are using ASP .NET MVC, you should take advantage of the rich and highly extensible validation APIs included in the framework (search with keywords IDataErrorInfo MVC Validation Application Block DataAnnotations for more). There are lots of ways for the calling method to know that your domain object rejected an argument -- for example, throwing the ArgumentOutOfRangeException.
For the example of ensuring that product names are unique, you would absolutely not put that in the Product class, because it requires knowledge of all other Products. That logically belongs in the persistence layer and, optionally, the repository. Depending on your use case, it may warrant a separate service method that verifies the name does not already exist, but you shouldn't assume that it will still be unique when you later try to persist it (it has to be checked again, because if you validate uniqueness and then keep the object around a while before persisting, someone else could still persist a record with the same name).
This is the way I do it:
I keep the validation code in the entity class, which implements some general Item interface.
interface Item {
bool Validate();
}
Then, in the repository's CRUD functions I call the appropriate Validate function.
This way all the logic paths validate my values, but I need to look in only one place to see what that validation really is.
Plus, sometimes you use the entities outside the repository scope, for example in a View. So if the validation is separate, each action path can validate without asking the repository.
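Sketched out, that pattern could look like this (the Item interface is the one from above; the Product and repository shapes are assumptions for the example):
public interface Item
{
    bool Validate();
}

public class Product : Item
{
    public string Name { get; set; }
    public decimal Price { get; set; }

    // The validation rules live with the entity, in one place.
    public bool Validate()
    {
        return !string.IsNullOrEmpty(Name) && Name.Length >= 4 && Price >= 0;
    }
}

public class ProductRepository
{
    // Every CRUD path re-checks the entity before touching the database.
    public bool Add(Product product)
    {
        if (!product.Validate())
            return false;

        // ...persist with the ORM of choice...
        return true;
    }
}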
For restrictions I utilize the partial classes on the DAL and implement the data annotation validators. Quite often, that involves creating custom validators but that works great as it's completely flexible. I've been able to create very complex dependent validations that even hit the database as part of their validity checks.
http://www.asp.net/(S(ywiyuluxr3qb2dfva1z5lgeg))/learn/mvc/tutorial-39-cs.aspx
In keeping with the SRP (single responsibility principle), you might be better served if the validation is separate from the product's domain logic. Since it's required for data integrity, it should probably be closer to the repository - you just want to be sure that validation is always run without having to give it thought.
In this case you might have a generic interface (e.g. IValidationProvider<T>) that is wired to a concrete implementation through an IoC container or whatever your preference may be.
public abstract class Repository<T> {
IValidationProvider<T> _validationProvider;
public ValidationResult Validate( T entity ) {
return _validationProvider.Validate( entity );
}
}
This way you can test your validation separately.
Your repository might look like this:
public class ProductRepository : Repository<Product> {
// ...
public RepositoryActionResult Add( Product p ) {
if( Validate( p ) == ValidationResult.Success ) {
// Do add..
return RepositoryActionResult.Success;
}
return RepositoryActionResult.Failure;
}
}
You could go a step further, if you intend to expose this functionality via an external API, and add a service layer to mediate between the domain objects and the data access. In this case, you move the validation to the service layer and delegate data access to the repository. You may have IProductService.Add(p). But this can become a pain to maintain due to all of the thin layers.
My $0.02.
Another way to accomplish this with loose coupling would be to create validator classes for your entity types, and register them in your IoC, like so:
public interface ValidatorFor<EntityType>
{
IEnumerable<IDataErrorInfo> errors { get; }
bool IsValid(EntityType entity);
}
public class ProductValidator : ValidatorFor<Product>
{
List<IDataErrorInfo> _errors;
public IEnumerable<IDataErrorInfo> errors
{
get
{
foreach(IDataErrorInfo error in _errors)
yield return error;
}
}
void AddError(IDataErrorInfo error)
{
_errors.Add(error);
}
public ProductValidator()
{
_errors = new List<IDataErrorInfo>();
}
public bool IsValid(Product entity)
{
// validate that the name is at least 4 characters;
// if so, return true;
// if not, add the error with AddError() and return false
}
}
Now when it comes time to validate, ask your IoC for a ValidatorFor<Product> and call IsValid().
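Usage might look something like the sketch below; the exact way the container injects ValidatorFor&lt;Product&gt; depends on the IoC library, so ProductSaveHandler is just an illustrative consumer.
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;

// Consumer side: the IoC container injects whatever ValidatorFor<Product>
// implementation (or decorator chain) is currently registered.
public class ProductSaveHandler
{
    private readonly ValidatorFor<Product> _validator;

    public ProductSaveHandler(ValidatorFor<Product> validator)
    {
        _validator = validator;
    }

    public bool TrySave(Product product, out IEnumerable<IDataErrorInfo> errors)
    {
        if (!_validator.IsValid(product))
        {
            errors = _validator.errors;
            return false;
        }
        errors = Enumerable.Empty<IDataErrorInfo>();
        // ...persist the product...
        return true;
    }
}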
What happens when you need to change the validation logic, though? Well, you can create a new implementation of ValidatorFor<Product>, and register that in your IoC instead of the old one. If you are adding another criterion, however, you can use a decorator:
public class ProductNameMaxLengthValidatorDecorator : ValidatorFor<Product>
{
List<IDataErrorInfo> _errors;
public IEnumerable<IDataErrorInfo> errors
{
get
{
foreach(IDataErrorInfo error in _errors)
yield return error;
}
}
void AddError(IDataErrorInfo error)
{
if(!_errors.Contains(error)) _errors.Add(error);
}
ValidatorFor<Product> _inner;
public ProductNameMaxLengthValidatorDecorator(ValidatorFor<Product> validator)
{
_errors = new List<IDataErrorInfo>();
_inner = validator;
}
bool IsWithinMaxLength(Product entity)
{
// validate that the name doesn't exceed the max length;
// return true if it is within the limit, false otherwise
}
public bool IsValid(Product entity)
{
var inner_is_valid = _inner.IsValid(entity);
var inner_errors = _inner.errors;
if(inner_errors.Count() > 0)
{
foreach(var error in inner_errors) AddError(error);
}
bool this_is_valid = IsWithinMaxLength(entity);
if(!this_is_valid)
{
// add the appropriate error using AddError()
}
return inner_is_valid && this_is_valid;
}
}
Update your IoC configuration and you now have a minimum and maximum length validation without opening up any classes for modification. You can chain an arbitrary number of decorators in this way.
Alternatively, you can create many ValidatorFor<Product> implementations for the various properties, and then ask the IoC for all such implementations and run them in a loop.
Alright, here is my third answer, because there are so very many ways to skin this cat:
public class Product
{
... // normal Product stuff
IList<Action> _validations; // each entry captures one test and its message; make sure to initialize
IList<string> _errors; // make sure to initialize
public IEnumerable<string> Errors { get { return _errors; } }
public void AddValidation(Predicate<Product> test, string message)
{
_validations.Add(
() => { if (!test(this)) _errors.Add(message); });
}
public bool IsValid()
{
foreach(var validation in _validations)
{
validation();
}
return _errors.Count() == 0;
}
}
With this implementation, you are able to add an arbitrary number of validators to the object without hardcoding the logic into the domain entity. You really need to be using IoC or at least a basic factory for this to make sense, though.
Usage is like:
var product = new Product();
product.AddValidation(p => p.Name.Length >= 4 && p.Name.Length <=20, "Name must be between 4 and 20 characters.");
product.AddValidation(p => !p.Name.Contains("widget"), "Name must not include the word 'widget'.");
product.AddValidation(p => p.Price >= 0, "Price must be nonnegative.");
product.AddValidation(p => p.Price <= 1, "This is a dollar store, for crying out loud!");
You can also use another validation system: add a method to IService in the service layer, such as:
IEnumerable<IIssue> Validate(T entity)
{
if(entity.Id == null)
yield return new Issue("error message");
}
