I will start to code a new Web application soon. The application will be built using ASP.Net MVC 3 and Entity Framework 4.1 (Database First approach). Instead of using the default EntityObject classes, I will create POCO classes using the ADO.NET POCO Entity Generator.
When I create POCOs using this tool, it automatically adds the Virtual keyword to all properties for change tracking and navigation properties for lazy loading.
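For reference, the template output looks roughly like this (the entity and property names here are made up by me for illustration):

public class Order
{
    // Scalar properties are virtual so EF's change-tracking proxies
    // can intercept writes.
    public virtual int OrderId { get; set; }
    public virtual string Status { get; set; }

    // Virtual navigation properties enable lazy loading through the proxy.
    public virtual Customer Customer { get; set; }
    public virtual ICollection<OrderLine> OrderLines { get; set; }
}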
I have, however, read and seen from demonstrations that Julie Lerman (EF guru!) seems to turn off lazy loading and also modifies her POCO template so that the Virtual keyword is removed from her POCO classes. Julie states that the reason she does this is that she writes applications for WCF services, and using the Virtual keyword there causes a serialization issue: as an object is being serialized, the serializer touches the navigation properties, which triggers lazy loading, and before you know it you are pulling the whole database across the wire.
I think Julie was perhaps exaggerating when she said this could pull the whole database across the wire; however, even so, this thought scares me!
My question is (finally): should I also remove the Virtual keyword from my POCO classes for my MVC application, and use DetectChanges for my change tracking and eager loading to request navigation properties?
Your help with this would be greatly appreciated.
Thanks as ever.
Serialization can indeed trigger lazy loading because the getter of the navigation property doesn't have a way to detect if the caller is the serializer or user code.
This is not the only issue: whether you mark just the navigation properties as virtual or all properties as virtual, EF will create a proxy type at runtime for your entities, so the entity instances the serializer has to deal with at runtime will typically be of a different type from the one you defined.
Julie's recommendations are the simplest and most reasonable way to deal with the issues, but if you still want to work with the capabilities of proxies most of the time and only sometimes serialize them with WCF, there are other workarounds available:
You can use a DataContractResolver to map the proxy types to be serialized as the original types
You can also turn off lazy loading only when you are about to serialize a graph (see the sketch below)
More details are contained in this blog post: http://blogs.msdn.com/b/adonet/archive/2010/01/05/poco-proxies-part-2-serializing-poco-proxies.aspx
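For the second workaround, a minimal sketch, assuming the ObjectContext-based POCO template (the context and entity set names are made up):

using (var context = new StoreEntities())
{
    // Turning lazy loading off means the serializer's reads of navigation
    // properties can no longer trigger additional queries.
    context.ContextOptions.LazyLoadingEnabled = false;

    // Eager-load exactly the graph that should cross the wire.
    var orders = context.Orders.Include("Customer").ToList();
    // ... hand the list to the WCF serializer
}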
Besides this, my recommendation would be that you use the DbContext template rather than the POCO template. DbContext is the new API we released as part of EF 4.1 with the goal of providing greater productivity. It has several advantages, such as automatically performing DetectChanges, so that in general you won't need to call the method yourself. The POCO entities we generate for DbContext are also simpler than the ones we generate with the POCO templates. You should be able to find lots of MVC examples using DbContext.
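For comparison, a minimal sketch of the DbContext shape (again with made-up names):

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

// Elsewhere in the application:
using (var context = new StoreContext())
{
    var order = context.Orders.Find(1);
    order.Status = "Shipped";
    // SaveChanges runs DetectChanges internally, so the modified property
    // is picked up without any extra calls.
    context.SaveChanges();
}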
Well, it depends on your needs. If you are going to serialize your POCO classes, then yes, you should remove the virtual keywords (for example, when using WCF services or basically anything that will serialize your entire object). But if you are just building a web app that needs to access your classes, then I would leave them in, since you control which objects you access through your code.
I have a project in which I am using NHibernate and ASP.Net MVC. The application is intended to allow users to track certain data and then produce views of statistics based upon the data entered. The structure of my application thus far looks something like this:
NHibernate Layer: Contains Repository<T> and UnitOfWork classes, as well as entity mapping definitions.
Core/Service Layer: Contains generic EntityService class. At the moment, this simply defines transaction scope via IUnitOfWork and interfaces with IRepository to provide higher-level data access services.
Presentation Layer (MVC Application): Not yet implemented, but contains the usual stuff plus dependency injection.
I have a couple of questions:
Is it poor design to allow my MVC application to handle dependency injection for ALL layers? For example, as well as dependency injection of EntityService instances into controllers, it will handle the dependency injection of IRepository into the EntityService classes. Should the service layer handle this itself, even though this would mean performing dependency injection in two distinct places?
Where should I produce my statistics? This business logic doesn't seem to belong in my service layer, which, at present, only contains entity type definitions and an interface for modifying and accessing entity properties. I have a few thoughts on this, but I'm not sure which I like best:
Keep my service layer as is and create a separate Statistics project - this is completely independent of the entity types for which it will be used, meaning my MVC controllers will have to pass raw numerical information between my business entities and my (presumably static) statistics classes. This is quite a neat separation but potentially means a lot of business logic still remaining in the presentation layer.
Create a Statistics project; however, create a tight coupling between the classes in this project and my business entities. For example, instead of passing a Reading object's values into a method, I will pass the entire object (or define them as extension methods). This will shift business logic out of my MVC app but the tight coupling seems a bit messy.
Keep all of my business logic inside my service layer. Define strongly-typed subclasses of EntityService, so my services contain both entity-specific business methods and data storage methods, while keeping the entity classes themselves as pure data containers. Create a separate Statistics project for any generic statistical processing and call its methods via my derived service classes. My service classes effectively merge business functions with the storage functionality provided by IRepository<T>.
I am leaning toward the third option, but does anyone have any thoughts? Alternative suggestions?
Thanks in advance!
Preliminary observation:
I like the way you described your project; I just didn't get why your data access layer (DAL) is called the "NHibernate Layer": it is at odds with the rest, where you (correctly) avoided technology names when describing logical layers. So I suggest you rename it DAL and use it to abstract your app from NHibernate.
My opinions about your questions:
Absolutely not. It is good to apply dependency injection to all layers. A couple of reasons why it is good:
1.1 Testing: you can mock the DAL interfaces and unit test the Service Layer without the DAL by using a different DI config file. In the same way you can mock the Service Layer for the web controllers layer, and so on.
1.2 Different DAL implementations: suppose you need a different DAL implementation (NoSQL, SQL, or LINQ instead of NHibernate, etc.) for different deployments of your project, or to scale in the future. You can do that easily by maintaining different DI config files.
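A minimal sketch of the constructor wiring this enables, using the types from your question (the IEntityService interface and the IUnitOfWork.Begin() call are my assumptions):

public class EntityService<T> : IEntityService<T> where T : class
{
    private readonly IRepository<T> repository;
    private readonly IUnitOfWork unitOfWork;

    // Whoever composes the application (the MVC container or a unit test)
    // decides which implementations are passed in here.
    public EntityService(IRepository<T> repository, IUnitOfWork unitOfWork)
    {
        this.repository = repository;
        this.unitOfWork = unitOfWork;
    }

    public void Save(T entity)
    {
        // IUnitOfWork.Begin() returning an IDisposable scope is assumed.
        using (unitOfWork.Begin())
        {
            repository.Add(entity);
        }
    }
}

In a test you hand it a mocked IRepository<T>; in production the container supplies the NHibernate-backed one. The DI config itself lives only in the MVC project.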
You can have the same layer deployed in different projects. In the same way you can have a project containing different layers. I think their relation is orthogonal: a project describes a physical (development-time and run-time) packaging, while layers are logical. So initially I would keep it simple with the third option.
I just don't understand why you say the following regarding this option:

Create a separate Statistics project for any generic statistical processing and call its methods via my derived service classes. My service classes effectively merge business functions with the storage functionality provided by IRepository<T>.
I see Statistics as one or more services, so you can implement it as a namespace with classes inside your Service Layer. And, as with any other service, you can inject the DAL Repository classes. And, as with any other service/DAL, the model classes can be shared between different services and DAL classes.
StatsService.AverageReadingFor(Person p, DateTime start, DateTime end) sounds good.
There are several implementation options:
Using underlying repository features (for example, SQL's AVG function)
Using the Observer pattern, which can also be implemented via dependency injection
Using aspect-oriented programming; see the Spring.NET chapter on AOP as an example.
If you have more than one Service Layer instance (more than one server), then options 2 and 3 must be adapted for out-of-process communication using a messaging system.
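To illustrate option 1: the stats service can push the aggregation down to the repository. A sketch (the Query() method returning IQueryable<T> and the Reading properties are my assumptions):

public class StatsService
{
    private readonly IRepository<Reading> readings;

    public StatsService(IRepository<Reading> readings)
    {
        this.readings = readings;
    }

    public double AverageReadingFor(Person person, DateTime start, DateTime end)
    {
        // NHibernate's LINQ provider can turn this into a SQL AVG,
        // so no rows are pulled into memory.
        return readings.Query()
            .Where(r => r.Person == person && r.TakenOn >= start && r.TakenOn <= end)
            .Average(r => r.Value);
    }
}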
Just an update: regarding my second question, I have decided to define an IStatsService<T> whose implementations expect an IEntityService<T> to be passed into the constructor. I'll use this for generic statistical processing of business entities and create further interfaces that extend IStatsService<T> where I need more type-specific information.
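In sketch form (the member names here are just placeholders, not the final design):

public interface IStatsService<T>
{
    double Average(Func<T, double> selector);
}

public class StatsService<T> : IStatsService<T>
{
    private readonly IEntityService<T> entityService;

    public StatsService(IEntityService<T> entityService)
    {
        this.entityService = entityService;
    }

    public double Average(Func<T, double> selector)
    {
        // IEntityService<T>.GetAll() is assumed here for illustration.
        return entityService.GetAll().Average(selector);
    }
}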
Hopefully this will help someone who has been scratching their head about a similar problem!
(I had a hard time titling the question so feel free to suggest edits)
Here's the situation: we have just started building a system comprising two integrated MVC 3 web applications running on Azure with a shared SQL Azure database. There are many reasons for running two apps instead of one and I'd rather not get into that...
Originally, the database was created code-first from MVC application "A". Roughly 75% of all the entities created will be relevant to application "B", and application "B" will also need a few entities specific to it.
Currently, the entity-defining classes have been extracted into a class library within the application "A" solution to allow for reuse in application "B". But I am still unsure how to go about adding the entities required for application "B"...
The question is: what is the best way to manage database development/management in this situation? Specifically, where should the definition of entities live? Should we just have a separate db project defining the database and work db-first? (This being my preferred option at this stage.)
Since both of the devs (me and the other dev) working on this are new to MVC and EF, any advice would be much appreciated.
Without seeing what you have, it's not entirely mapping here in my brain, but I think I may have an idea on this.
Can you create an additional project containing your models (data access layer) that has your Entity Framework EDMX (or code first) and POCO templates installed? This project will be shared by both applications, i.e. both projects reference this assembly and both have the EF connection string in their web.configs.
Another approach is to put all the code-first classes into a single project (whatever.domain, whatever.models, etc.). Your mapping code then goes into your DataAccess project:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Remove the pluralizing table-name convention so table names
    // match the class names.
    modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();

    // Register each entity's fluent mapping configuration.
    modelBuilder.Configurations.Add(new CustomerMap());
    ...
}
You now have shared poco classes and a single data access layer.
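For completeness, the CustomerMap above would be a fluent mapping class along these lines (a sketch; the property names are invented):

public class CustomerMap : EntityTypeConfiguration<Customer>
{
    public CustomerMap()
    {
        // Describe the table, key and columns here, keeping the
        // Customer POCO itself free of any persistence concerns.
        ToTable("Customers");
        HasKey(c => c.CustomerId);
        Property(c => c.Name).IsRequired().HasMaxLength(100);
    }
}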
Some treat their POCO classes as their domain objects (there's no problem with this) and put their business logic in the POCO classes. This is fine as long as your POCO objects themselves remain persistence-ignorant; ideally you don't want to reference implementation-specific components in your POCO classes. For a good write-up, see:
POCO - if POCO means pure .net class with only properties, where i can write validations in MVC
Personally I like going db-first and then reverse engineering it using the EF Power Tools to get a code-first model: if you ever want to integration test it, you can simply create the db for your integration tests and remove it when done.
Can anyone help me with converting my project to use PetaPoco?
Here is my issue: the backend is a SQL 2010 database, .NET Framework 4.0.
I have an existing 3-tier Win app in C# that uses a custom DAL: each data call uses stored procs with parameters and returns either a dataset or a specific value as needed. Each call accepts a dataset passed by reference and a base-class parameter (the base class is identical to the DB table schema, well, mostly).
I want to replace my custom DAL with PetaPoco but keep the 3-tier layout.
The app relies on predefined base classes as DTOs to pass info between UI, BAL, and DAL.
Does anyone have a sample/example of an app solution layout showing how to use PetaPoco in a 3-tier environment? A code example would be very helpful.
thanks in advance...
Vlad
Example not really needed
All you have to do is get acquainted with the PetaPoco library. The best way is its documentation. It's not a complicated library, so you should get up to speed with it quite quickly.
If you also have your application broken down into projects for each layer (UI, BL, DAL), then the easiest thing to do is to create a new DAL project and implement all the used functionality of the existing DAL, but using PetaPoco in this one. Then just change your project references and voila, that's it. You can keep your POCOs/DAOs. If you've used IoC it will be even easier, because instantiating DAL repositories (or whatever you're using) is probably done via some DI container.
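A rough sketch of what one of those PetaPoco-backed DAL classes could look like (the class, table, and connection string names are placeholders):

public class CustomerRepository
{
    private readonly PetaPoco.Database db = new PetaPoco.Database("MainConnection");

    public Customer GetById(int id)
    {
        // PetaPoco generates the SELECT clause when the SQL starts with WHERE.
        return db.SingleOrDefault<Customer>("WHERE CustomerId = @0", id);
    }

    public void Save(Customer customer)
    {
        // Inserts or updates depending on the primary key value.
        db.Save("Customers", "CustomerId", customer);
    }
}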
Layering and PetaPoco
PetaPoco has nothing to do with application layering. If you use it in a 3-tier application, that's fine.
What are you using now?
You didn't mention which DAL library (if any) you're using right now. If you aren't using one, then moving to PetaPoco will result in fewer lines of code and much simplified object mapping.
I am using MVC3 for my application and I have a question about validation. I have a Business Logic layer that is separate from my web layer where I will have a function like CreateUser, which creates a new user for the application to use. I want this function to be accessible in two places: 1) Somewhere in a controller that makes use of it and 2) in a "Setup Data" program that inserts data into the system.
I want to make use of things like ModelState.IsValid to check for all basic validation, but this won't help me in my Setup Data mode (or any other mode that doesn't go through MVC). Is there any way I can still leverage this code but contain all validation in my BusinessLogic layer instead of in the controller, without having the BusinessLogic layer rely on MVC?
Thanks.
It looks like this article about Service Layers has what I need. Other suggestions are still welcome. Thanks.
Note that the article on service layers still means you need a dependency on the MVC assembly. After wrestling a bit with this myself recently, I'm now of the opinion that keeping things as separate as possible is a good design. In my model assembly, I have a services folder where, from, say, a Create() routine, I validate and throw custom exceptions.
The service layer doesn't care who consumes these exceptions or how. In your MVC app, map them into model-state error collections or whatever. Your design is all the more solid because your model assembly doesn't depend on some validation runner making appropriate use of MVC validation attributes, collections, etc.
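A bare-bones sketch of the approach (the User type, repository, and messages are all invented):

// In the model/service assembly: no MVC reference anywhere.
public class ValidationException : Exception
{
    public string Key { get; private set; }

    public ValidationException(string key, string message)
        : base(message)
    {
        Key = key;
    }
}

public void Create(User user)
{
    if (string.IsNullOrWhiteSpace(user.Name))
        throw new ValidationException("Name", "Name is required.");
    repository.Add(user);
}

// In the MVC controller, map the exception into model state.
try
{
    userService.Create(user);
}
catch (ValidationException ex)
{
    ModelState.AddModelError(ex.Key, ex.Message);
    return View(user);
}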
I also noticed the article mentions a repository. I know it's all the rage these days, but if you're already using an ORM like Entity Framework, a repository is really just a DAO. Repository is the new Singleton.
I've been using ASP.net MVC for about two years now and I'm still learning the best way to structure an application.
I wanted to throw out these ideas that I've gathered and see if they are "acceptable" ways in the community to design MVC applications.
Here is my basic layout:
DataAccess Project - Contains all repository classes, LINQ-to-SQL data contexts, Filters, and custom business objects for non-MS SQL db repositories (that LINQ-to-SQL doesn't create). The repositories typically only have basic CRUD for the object they're managing.
Service Project - Contains service classes that perform business logic. They take orders from the Controllers and tell the repositories what to do.
UI Project - Contains view models and some wrappers around things like the ConfigurationManager (for unit testing).
Main MVC Project - Contains controllers and views, along with javascript and css.
Does this seem like a good way to structure ASP.NET MVC 2 applications? Any other ideas or suggestions?
Are view models used for all output to views and input from views?
I'm heading down the path of making a view model for each business object that needs to display data in the view, making them basic classes with a bunch of properties that are all strings. This makes dealing with the views pretty easy. The service layer then needs to manage mapping properties from the view model to the business object. This is a source of some of my confusion, because most of the examples I've seen on MVC/MVC2 do not use a view model unless they need something like a combo box.
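Roughly what I have in mind (the Product types here are invented):

public class ProductViewModel
{
    // Everything is a string so the view never has to format anything.
    public string Name { get; set; }
    public string Price { get; set; }
}

// In the service layer, map back to the business object.
public Product ToProduct(ProductViewModel vm)
{
    return new Product
    {
        Name = vm.Name,
        Price = decimal.Parse(vm.Price, CultureInfo.InvariantCulture)
    };
}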
If you use MVC 2's new model validation, would you then validate the viewmodel object and not have to worry about putting the validation attributes on the business objects?
How do you unit test this type of validation or should I not unit test that validation messages are returned?
Thanks!
Interesting.
One thing I do differently is that I split off my DataAccess project from my Domain project. The domain project still contains all the interfaces for my repositories but my DataAccess project contains all the concrete implementations of them.
You don't want stuff like a DataContext leaking into your domain project. Following the onion architecture, your domain shouldn't have any dependencies on external infrastructure, and I would consider DataAccess to be exactly that, because it's directly tied to a database.
Splitting them off means that my domain doesn't have a dependency on any ORM or database, so I can swap them out easily if need be.
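In sketch form (all names invented):

// Domain project: just the contract, no ORM references.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

// DataAccess project: the LINQ-to-SQL-specific implementation.
public class CustomerRepository : ICustomerRepository
{
    private readonly MyDataContext context;

    public CustomerRepository(MyDataContext context)
    {
        this.context = context;
    }

    public Customer GetById(int id)
    {
        return context.Customers.Single(c => c.CustomerId == id);
    }

    public void Add(Customer customer)
    {
        context.Customers.InsertOnSubmit(customer);
    }
}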
Cheers,
Charles
P.S. What does your project dependency graph look like? I've been wondering where to put my view models. Maybe a separate UI project is a good idea, but I'm not entirely sure how that would work. How do they flow through the different project tiers of your application?