EF: Should entities have business logic? - asp.net-mvc-3

I was wondering, should the entities have the capability to save changes to the context? Or have business logic that relates to that particular entity? For example:
ActionResult ResetPassword(UserViewModel viewModel)
{
    var user = userCollection.GetUser(viewModel.Username);
    user.ResetPassword();
    return View();
}
where:
class User : Entity
{
    public string Password { get; set; }

    public void ResetPassword()
    {
        Password = "";
        context.SaveChanges();
    }
}
I find this a bit weird, since the entity would have a reference to the context. I am not sure whether this would even work, or whether it is recommended. But I want to work within a domain where I do not have to worry about saving changes at a higher level, etc. How can I accomplish this?
Thanks!
Update
I have updated my example - hope it's a bit clearer now :)

According to Domain-Driven Design, domain objects should have behavior.
You should definitely read this book:

I would keep my entities as POCOs (Plain Old CLR Objects, classes with only properties) and have a Repository expose methods like Insert/Update.
I can keep my Entities in a separate class library and use it in different places ( even in a different project), with a different Repository implementation.
This tutorial nicely explains how to implement the Repository and Unit of Work patterns in an MVC project that uses Entity Framework.
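To make that concrete, here is a minimal sketch of a POCO entity plus a repository that owns persistence; the names (IUserRepository, UserRepository, MyDbContext) are illustrative assumptions, not from the original post:

using System.Linq;

// POCO entity: properties only, no reference to any context
public class User
{
    public int UserId { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}

// The repository owns all persistence concerns
public interface IUserRepository
{
    User GetByUsername(string username);
}

public class UserRepository : IUserRepository
{
    private readonly MyDbContext _context; // hypothetical EF context

    public UserRepository(MyDbContext context)
    {
        _context = context;
    }

    public User GetByUsername(string username)
    {
        return _context.Users.SingleOrDefault(u => u.Username == username);
    }
}

Because the entity class library has no dependency on the context, it can be reused with a different repository implementation in another project, as described above.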

You may want to consider the UnitOfWork pattern between your controller and your entities.
http://martinfowler.com/eaaCatalog/unitOfWork.html
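As a hedged sketch of how that pattern answers the original question (the UnitOfWork class and names below are assumptions, not from the post), the unit of work owns the context and the commit, so neither entities nor controllers ever call SaveChanges themselves:

using System;

public class UnitOfWork : IDisposable
{
    private readonly MyDbContext _context = new MyDbContext(); // hypothetical context

    public IUserRepository Users
    {
        get { return new UserRepository(_context); }
    }

    // The single place where changes are persisted
    public void Commit()
    {
        _context.SaveChanges();
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}

// Controller usage: the action mutates the entity, the unit of work saves
public ActionResult ResetPassword(UserViewModel viewModel)
{
    using (var unitOfWork = new UnitOfWork())
    {
        var user = unitOfWork.Users.GetByUsername(viewModel.Username);
        user.Password = string.Empty; // pure domain change, no persistence here
        unitOfWork.Commit();
        return View();
    }
}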

Related

How and where to map EF POCOs to Business Objects

I have the projects
Domain.Model (contains code first POCOs)
Data.Context (contains the context & migrations only)
Data.Access (contains IGenericRepository & GenericRepository)
Service (contains BL service classes and UnitsOfWork)
Presentation.Admin (an Asp.Net Webforms web application)
I am using my POCOs as business objects across all the layers. I know there is some debate about this, but it is also fairly widely accepted.
So the Presentation calls the Service, which gets a POCO via the Repository and returns it to the Presentation for display (for example, as an HTML table), and edits are saved back to the DB - great.
Now I have a more complex page which I think requires a business object. This is a made-up but analogous example.
POCO
public class Book
{
    public int BookId { get; set; }
    public string ExternalReference { get; set; }
}

public class Movie
{
    public int MovieId { get; set; }
    public string ExternalReference { get; set; }
}
Suggested Business Object
public class MovieAdaptation
{
    public Book Book { get; set; }
    public Movie Movie { get; set; }
}
So ExternalReference is external and cannot be a common foreign key in my database, therefore I cannot just do Book.Movie using a navigation property. I (probably) need to do a LINQ join.
So my questions are:
1) Where should I define this business object? Currently it is just in the Service layer, as only things that reference the service layer will use it.
2) Where should I construct this business object? Should it be in repositories which sit in Data.Access or further up?
3) How do I construct it using LINQ? Here is my best shot so far, but it seems pretty inefficient, especially if I am returning a list of these.
namespace MyProject.Services
{
    public class AdaptationService
    {
        AdaptationUnitOfWork _unitOfWork;

        public AdaptationService()
        {
            _unitOfWork = new AdaptationUnitOfWork();
        }

        public Adaptation GetAdaptation(string externalReference)
        {
            // Can anyone improve this, maybe using a LINQ join (as maybe it won't be
            // getting books/movies by SingleOrDefault but by Where)?
            Book book = _unitOfWork.BookRepository.Get.SingleOrDefault(b => b.ExternalReference == externalReference);
            Movie movie = _unitOfWork.MovieRepository.Get.SingleOrDefault(m => m.ExternalReference == externalReference);

            Adaptation adaptation = new Adaptation();
            adaptation.Book = book;
            adaptation.Movie = movie;
            return adaptation;
        }
    }
}
1) Where should I define this business object? Currently it is just in the Service layer, as only things that reference the service layer will use it.
I would probably keep it in the Service layer. It's not relevant to your DAL, as it's a combination of your POCOs, not the POCOs themselves. Have the Service layer construct/destruct it (see below).
2) Where should I construct this business object? Should it be in repositories which sit in Data.Access or further up?
Construct and destruct it in the Service layer. The DAL should only send and receive your data access objects (your POCOs). Constructing and destructing business objects is not part of its job description. For all intents and purposes, whenever you use the term business object it should be above the DAL.
3) How do I construct it using LINQ? Here is my best shot so far, but it seems pretty inefficient, especially if I am returning a list of these.
I don't have a better answer than the example you gave. It sounds like you have to perform two queries and then construct it yourself, as you're doing.
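That said, if BookRepository.Get and MovieRepository.Get expose IQueryable over the same context (an assumption; the post doesn't show their types), a LINQ join can at least build the whole list in one translated query instead of two lookups per adaptation; a rough sketch:

using System.Collections.Generic;
using System.Linq;

public List<Adaptation> GetAllAdaptations()
{
    // Pairs every book and movie sharing an ExternalReference;
    // translated to a single SQL join if both sides share a context.
    return (from b in _unitOfWork.BookRepository.Get
            join m in _unitOfWork.MovieRepository.Get
                on b.ExternalReference equals m.ExternalReference
            select new Adaptation { Book = b, Movie = m })
           .ToList();
}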

how to synchronize data annotations between model and view models

I'm working with EF Code First, so my data annotations are driving my SQL Server database column definitions/properties (i.e., [StringLength(30)] = nvarchar(30), etc.). I'm using ViewModels to drive my views. How can I synchronize the data annotations between Models and ViewModels?
For example, I have the following entity class:
public class Ticket
{
    ...

    [Required]
    [DataType(DataType.Currency)]
    [DisplayFormat(DataFormatString = "{0:C}")]
    public double TicketBalance { get; set; }

    ...
}
And a ViewModel that uses the same property from the Model class:
public class EditTicketViewModel
{
    ...

    [Required]
    [DataType(DataType.Currency)]
    [DisplayFormat(DataFormatString = "{0:C}")]
    public double TicketBalance { get; set; }

    ...
}
How can I synchronize these two data annotations?
While you can't change the attributes on your ViewModels at runtime, you can to some degree emulate them for the purposes of validation (which is presumably the reason that you're using data annotations).
This requires creating the ViewModels using an object mapper like AutoMapper or EmitMapper. You can then hook into an appropriate part of the mapping process to update the DataAnnotationsModelMetadataProvider and DataAnnotationsModelValidatorProvider, which are used by MVC in the various parts of the validation process.
This answer shows a way of doing it with AutoMapper. I'm currently having some fun looking at a solution with EmitMapper, since that's somewhat faster to execute.
There is no synchronization between the two. While they may look similar, they actually are different: one is for the database, the other is for the GUI.
For the database you mainly want to test for [Required] and [StringLength(XXX)]. Sometimes [DataType] as well.
For the GUI you want to check for those in addition to formatting, regular expressions, ranges, etc.
There are validation attributes, display attributes, data modeling attributes. Choose the right attributes category at the right place according to the situation.
And it gets even worse when you start using things like jQuery validation or KnockoutJS validation. In that case you will have to duplicate your efforts a third time for JS purposes. Unfortunately.
You can also check what other folks did here: How do I remain DRY with asp.net mvc view models & data annotation attributes?
Those folks use inheritance, which is fine, but can be a bit confusing for others reading your code later on.
The good advice is to switch from data annotations to fluent validation, as per one of the responses in the link above. It will allow you to apply the same validation class to multiple models, as sketched below.
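To illustrate that suggestion, here is a hedged sketch using the FluentValidation library; the shared interface and rule below are assumptions built around the TicketBalance example, not code from the answer:

using FluentValidation;

// Both the entity and the view model expose TicketBalance,
// so one validator can cover both via a shared interface.
public interface IHasTicketBalance
{
    double TicketBalance { get; }
}

public class TicketBalanceValidator<T> : AbstractValidator<T>
    where T : IHasTicketBalance
{
    public TicketBalanceValidator()
    {
        RuleFor(x => x.TicketBalance)
            .GreaterThanOrEqualTo(0)
            .WithMessage("Ticket balance cannot be negative.");
    }
}

// Usage: the same rules validate the entity and the view model.
// var result = new TicketBalanceValidator<EditTicketViewModel>().Validate(viewModel);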
Hope this helps.

MVC3, RavenDB, AutoMapper, IoC/DI and Geese - where should I use my associated repositories?

I'm very new to RavenDB and MVC3, in particular the usage (not concept) of IoC. So just to warn you that this will sound like a very beginner question.
In summary:
I have a domain model, let's say it's
public class Goose
Within this class I might have a more complex object as a property
public Beak beak { get; set; }
In RavenDB we are rightly encouraged to [JsonIgnore] this property or not have it at all and instead have a reference identifier, like
public String beakId { get; set; }
Somewhere along the way in my MVC3 application I will want to view the Goose, and I might want to display to the user something about the Goose and its Beak (should that be Bill?). So yeah, I need a view model, right?
public class GooseModel
{
    public String BeakColour { get; set; }
    public String BeakLength { get; set; }
    ...etc
}
Right, so assuming I have some GooseRepository and some BeakRepository, here's the simple question....
I'm in the GooseController class and I'm loading a Goose to view. At what point do I use the BeakRepository, and who should know about it? The GooseController knows about the GooseRepository and is loading the Goose by id. At this point we could have some property inside the Goose class which represents the whole Beak, but I don't really want to inject the BeakRepository into the GooseRepository, do I? Ok, so perhaps when I create the GooseModel from the Goose I've found, I could get the GooseModel properties for the BeakColour and BeakLength - but how? Well, I like AutoMapper, so perhaps my map for the GooseModel from Goose could use the BeakRepository to find the Beak and then extract the two Beak properties to populate the GooseModel fields... this too seems wrong. So what's left? The GooseController... should the GooseController know about the BeakRepository and then find and set the BeakColour and BeakLength!? That certainly seems completely wrong too.
So where does it get done? the Controller, the domain object, the mapper or somewhere else? Perhaps I should have a partial view of Type Beak which is used within the Goose view?..
I tend to consolidate this kind of logic into a service/business layer (GooseService) that I then inject into the controller. Your service layer might take a GooseRepository and a BeakRepository, and return a resolved object that maps the GooseViewModel together.
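A minimal sketch of that suggestion (the constructor injection and the repository/method names are assumptions, not from the answer):

public class GooseService
{
    private readonly IGooseRepository _geese;
    private readonly IBeakRepository _beaks;

    // Repositories are supplied by the IoC container
    public GooseService(IGooseRepository geese, IBeakRepository beaks)
    {
        _geese = geese;
        _beaks = beaks;
    }

    public GooseModel GetGooseModel(string gooseId)
    {
        var goose = _geese.Load(gooseId);
        var beak = _beaks.Load(goose.beakId); // reference resolved here, not inside GooseRepository

        // Beak.Colour and Beak.Length are hypothetical properties
        return new GooseModel
        {
            BeakColour = beak.Colour,
            BeakLength = beak.Length
        };
    }
}

The controller then depends only on GooseService, so neither repository leaks into the controller or into the other repository.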
Uhm... reading your question, I strongly suggest you forget about the Service layer and the Repository layer. If you don't have really good reasons to keep them (testing is not one of them, since RavenDB has an EmbeddableDocumentStore, which is fast and easy), pull them in order to take advantage of some very nice features of RavenDB.
I've actually written a post about why I think you should generally avoid these layers:
http://daniellang.net/keep-your-code-simple/ It is about NHibernate, but concepts apply here as well.
Whether you should denormalize the BeakColor and BeakLength properties into your Goose document depends on your application's needs. If you feel comfortable with the term "aggregate root", then the rule of thumb is that these generally are your documents. If you're not sure whether denormalization should be applied, avoid it, and use .Include(goose => goose.beakId) instead when loading your Goose.
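For illustration, a hedged sketch of that Include approach with the RavenDB client (the documentStore variable and the document id are assumptions):

using (var session = documentStore.OpenSession())
{
    // Include fetches the referenced Beak in the same round trip
    var goose = session.Include<Goose>(g => g.beakId)
                       .Load("gooses/1");

    // Served from the session cache - no second server call
    var beak = session.Load<Beak>(goose.beakId);
}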
Please let me know if that makes sense to you.

When using Data Annotations with MVC, Pro and Cons of using an interface vs. a MetadataType

If you read this article on Validation with the Data Annotation Validators, it shows that you can use the MetadataType attribute to add validation attributes to properties on partial classes. You use this when working with ORMs like LINQ to SQL, Entity Framework, or Subsonic. Then you can use the "automagic" client and server side validation. It plays very nicely with MVC.
However, a colleague of mine used an interface to accomplish exactly the same result. It looks almost exactly the same and functionally accomplishes the same thing. So instead of doing this:
[MetadataType(typeof(MovieMetaData))]
public partial class Movie
{
}

public class MovieMetaData
{
    [Required]
    public object Title { get; set; }

    [Required]
    [StringLength(5)]
    public object Director { get; set; }

    [DisplayName("Date Released")]
    [Required]
    public object DateReleased { get; set; }
}
He did this:
public partial class Movie : IMovie
{
}

public interface IMovie
{
    [Required]
    object Title { get; set; }

    [Required]
    [StringLength(5)]
    object Director { get; set; }

    [DisplayName("Date Released")]
    [Required]
    object DateReleased { get; set; }
}
So my question is, when does this difference actually matter?
My thoughts are that interfaces tend to be more "reusable", and that making one for just a single class doesn't make that much sense. You could also argue that you could design your classes and interfaces in a way that allows you to use interfaces on multiple objects, but I feel like that is trying to fit your models into something else, when they should really stand on their own. What do you think?
I like your interface approach, as it allows you to define a contract for your model which you can use to adapt your ORM-generated classes. That would allow you to decouple your app from the ORM framework and get more use out of the interface, as it serves as data-validation metadata as well as a contract for your model. You could also decorate your interface with serialization attributes for use in WCF, getting even more use out of it. I followed a few early blogs that recommended creating a metadata class, but again I think the interface solution is a nice idea.
If those two options are the two I am presented with, I would personally probably choose the interface way, simply because I think it looks cleaner. But this is entirely based on personal taste - I don't know enough about the inner workings of .NET to say for sure, but I don't know any case where the actual functionality of the two approaches would differ.
On the other hand, a much better approach would be to use Data Transfer Objects (DTOs) for sending data back and forth, and put the validation requirements on them. That is, instead of requiring that the Movie object meet all the validation requirements, you require that a MovieInput object meets all those requirements, and then create code to map a correct MovieInput into a Movie. (If you don't want to do that manually, you could use AutoMapper or some other utility.)
The concept is basically to have something like a View Model object on the way in just as well as on the way out - I could just as well have let MovieInput be called MovieViewModel and use it for transferring of data both in and out of the server.
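A hedged sketch of that DTO idea (MovieInput, its property types, and the mapping are illustrative, not prescribed by the answer):

using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

// The input DTO carries the validation requirements
public class MovieInput
{
    [Required]
    public string Title { get; set; }

    [Required]
    [StringLength(5)]
    public string Director { get; set; }

    [DisplayName("Date Released")]
    [Required]
    public DateTime? DateReleased { get; set; }
}

// Hand-written mapping from a validated MovieInput to the domain entity;
// AutoMapper could replace this boilerplate. Assumes Movie exposes
// matching properties.
public static class MovieMapper
{
    public static Movie ToMovie(MovieInput input)
    {
        return new Movie
        {
            Title = input.Title,
            Director = input.Director,
            DateReleased = input.DateReleased.Value
        };
    }
}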
I see no functional difference between the two approaches. I'm not sure reusability is really important here, given that validation will most often be on "one-off" ViewModels that probably won't get much, if any, reuse.

LINQ To SQL entity objects as domain objects

Clearly separation of concerns is a desirable trait in our code and the first obvious step most people take is to separate data access from presentation. In my situation, LINQ To SQL is being used within data access objects for the data access.
My question is, where should the use of the entity object stop? To clarify, I could pass the entity objects up to the domain layer but I feel as though an entity object is more than just a data object - it's like passing a bit of the DAL up to the next layer too.
Let's say I have a UserDAL class, should it expose an entity User object to the domain when a method GetByID() is called, or should it spit out a plain data object purely for storing the data and nothing more? (seems like wasteful duplication in this case)
What have you guys done in this same situation? Is there an alternative method to this?
Hope that wasn't too vague.
Thanks a lot,
Martin.
I return IQueryable of POCOs from my DAL (which uses LINQ to SQL), so no LINQ entity object ever leaves the DAL. These POCOs are returned to the service and UI layers, and are also used to pass data back into the DAL for processing. LINQ handles this very well:
IQueryable<MyObjects.Product> products =
    from p in linqDataContext.Products
    select new MyObjects.Product // POCO
    {
        ProductID = p.ProductID
    };

return products;
For most projects, we use LINQ to SQL entities as our business objects.
The LINQ to SQL designer allows you to control the accessibility of the classes and properties that it generates, so you can restrict access to anything that would allow the consumer to violate the business rules and provide suitable public alternatives (that respect the business rules) in partial classes.
There's even an article on implementing your business logic this way on the MSDN.
This saves you from writing a lot of tedious boilerplate code and you can even make your entities serialisable if you want to return them from a web service.
Whether or not you create a separate layer for the business logic really depends on the size of your project (with larger projects typically having greater variation between the business logic and data access layers).
I believe LINQ to Entities attempts to provide a one-stop solution to this conundrum by maintaining two separate models (a conceptual schema for your business logic and a storage schema for your data access).
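To illustrate the partial-class technique described above, here is a hedged sketch; the Product entity, its properties, and the rule are hypothetical, not from the answer:

using System;

// Extends the designer-generated Product entity with behavior that
// enforces a business rule before the flag can be flipped.
public partial class Product
{
    public void Discontinue()
    {
        // Business rule: a product with stock on hand cannot be discontinued
        if (UnitsInStock > 0)
            throw new InvalidOperationException(
                "Sell or transfer remaining stock before discontinuing.");

        Discontinued = true;
    }
}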
I personally don't like my entities to spread across the layers. My DAL returns POCOs (of course, it often means extra work, but I found this much cleaner - maybe this will be simpler in the next .NET version ;-)).
The question is not so simple, and there are lots of different schools of thought on the subject (I keep asking myself the same question that you are).
Maybe you could take a look at the MVC Storefront sample app: I like the essence of the concept (especially the mapping that occurs in the data layer).
Hope this helps.
There is a similar post here; however, I see your question is more about what you should do, rather than how you should do it.
In small applications I find a second POCO implementation to be wasteful; in larger applications (particularly those that implement web services) the POCO object (usually a Data Transfer Object) is useful.
If your app falls into the latter case, you may want to look at ADO.NET Data Services.
Hope that helps!
I have actually struggled with this as well. Using plain vanilla LINQ to SQL, I quickly abandoned the DBML tooling, because it bound the entities too tightly to the DAL. I was striving for a higher level of persistence ignorance, although Microsoft didn't make it very easy.
What I ended up doing was hand-writing the persistence ignorance layer, by having the DAL inherit from my POCOs. The inherited objects exposed the same properties as the POCOs they inherit from, so while inside the persistence ignorance layer, I could use attributes to map the objects. The caller could then cast the inherited object back to its base type, or have the DAL do that for them. I preferred the latter, because it lessened the amount of casting that needed to be done. Granted, this was a primarily read-only implementation, so I would have to revisit it for more complex update scenarios.
The amount of manual coding for this is rather large, because I also have to manually maintain (after coding, to begin with) the context and provider for each data source, on top of the object inheritance and mappings. If this project was being deprecated, I would definitely move to a more robust solution.
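A rough sketch of that inheritance approach (the names are hypothetical, and it assumes LINQ to SQL's attribute mapping accepts overridden properties on the mapped subclass):

using System.Data.Linq.Mapping;

// Persistence-ignorant POCO, lives in the domain layer
public class Product
{
    public virtual int ProductId { get; set; }
    public virtual string Name { get; set; }
}

// DAL-side subclass that carries the mapping attributes; the DAL queries
// ProductRecord but hands callers the base Product type.
[Table(Name = "dbo.Products")]
public class ProductRecord : Product
{
    [Column(Name = "ProductID", IsPrimaryKey = true)]
    public override int ProductId { get; set; }

    [Column(Name = "Name")]
    public override string Name { get; set; }
}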
Looking forward to the Entity Framework, persistence ignorance is a commonly requested feature according to the design blogs for the EF team. In the meantime, if you decide to go the EF route, you could always look at a pre-rolled persistence ignorance tool, like the EFPocoAdapter project on MSDN, to help.
I use a custom LINQ to SQL generator, built upon one I found on the Internet, in place of the default MSLinqToSQLGenerator.
To make my upper layers independent of such LINQ objects, I create interfaces to represent each one of them and then use those interfaces in these layers.
Example:
public interface IConcept
{
    long Code { get; set; }
    string Name { get; set; }
    bool IsDefault { get; set; }
}

public partial class Concept : IConcept { }
[Table(Name = "dbo.Concepts")]
public partial class Concept
{
    private long _Code;
    private string _Name;
    private bool _IsDefault;

    partial void OnCreated();

    public Concept() { OnCreated(); }

    [Column(Storage = "_Code", DbType = "BigInt NOT NULL IDENTITY", IsPrimaryKey = true)]
    public long Code
    {
        //***
    }

    [Column(Storage = "_Name", DbType = "VarChar(50) NOT NULL")]
    public string Name
    {
        //***
    }

    [Column(Storage = "_IsDefault", DbType = "Bit NOT NULL")]
    public bool IsDefault
    {
        //***
    }
}
Of course there is much more than this, but that's the idea.
Please keep in mind that LINQ to SQL is not a forward-looking technology. It was released, it's fun to play with, but Microsoft is not taking it anywhere. I have a feeling it won't be supported forever either. Take a look at the Entity Framework (EF) by Microsoft, which incorporates some of the LINQ to SQL goodness.
