Best practice for persisting database-stored lookup data at app level in MVC - asp.net-mvc-3

Slogging through MVC+EF and trying to focus on doing things the right way. Right now I'm looking to add a dropdown to a form, but I'd like to avoid hitting the database every time the page loads, so I'd like to store the data at the application level. I figure creating an application-level variable isn't the best approach. I've read about using the cache and static utility functions but, surprisingly, nothing has sounded terribly definitive (static classes bad for unit testing, caching bad...).
So I have two scenarios that I'm curious about; I'm not sure if the approach would differ between the two.
1) A basic lookup, let's say the fifty states. Small, defined, will never change. Load at application startup. (Not looking for a hard-coded solution but retrieval from the database.)
2) A lookup that will very rarely change, and only via an admin-like screen. Let's say, cities/stores where your product is being sold. So the data would be stored in the model but would be relatively static unless someone made changes via the application. So I'm not looking to hit the database every time I need to populate a dropdown/listbox.
Seems like basic stuff but it's basically the same as this topic that was never answered:
Is it good to use a static EF object context in an MVC application for better perf?
Any help is appreciated.

I will address your question in a few parts. First off: is it inherently bad to use static variables or caching patterns in MVC? The answer is simply no. As long as your architecture supports them, it is OK. Just put your cache in the right place and design for testability, as I will explain later.
The second part is what the "right" way is to store this type of persisted data so you don't have to make round trips to the DB to populate common UI items. For this, I don't recommend caching EF objects. I would create POCO objects (view models or similar) that you cache. So in the example of your 50 states you might have something like this:
public class State
{
    public string Abbreviation { get; set; }
    public string Name { get; set; }
}
Then you would do something like this to create your cached list:
List<State> states = Context.StateData
    .Select(s => new State { Abbreviation = s.Abbreviation, Name = s.Name })
    .ToList();
Finally, whatever your caching solution is, it should implement an interface so you can mock that caching method for testing.
To do this without running into circular references or using reflection, you will need at least 3 assemblies:
Your MVC application
A class library to define your POCO objects and interfaces
A class library to perform your data access and caching (this can obviously be split into 2 libraries if that makes it easier to maintain and/or test)
That way you could have something like this in your MVC code:
ICache myCache = CacheFactory.CreateCache();
List<State> states = myCache.ListStates();
// populate your view model with states
Where ICache and State are in one library and your actual implementation of ICache is in another.
This is what I do for my standard architecture: splitting POCO objects and interfaces, which are data-access agnostic, into a separate library from the data access code, which is in turn separate from my MVC app.
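To make that concrete, here is a minimal sketch of how those pieces could fit together; the WebCache class, the MyEntities context name, and the use of HttpRuntime.Cache are illustrative assumptions, not part of the original answer:

// In the POCO/interfaces library (alongside State):
public interface ICache
{
    List<State> ListStates();
}

// In the data access/caching library:
public class WebCache : ICache
{
    public List<State> ListStates()
    {
        // ASP.NET's application-wide cache; any cache store would do here.
        var states = (List<State>)System.Web.HttpRuntime.Cache["States"];
        if (states == null)
        {
            using (var context = new MyEntities()) // hypothetical EF context
            {
                states = context.StateData
                    .Select(s => new State { Abbreviation = s.Abbreviation, Name = s.Name })
                    .ToList();
            }
            System.Web.HttpRuntime.Cache.Insert("States", states);
        }
        return states;
    }
}

// Also in the data access/caching library:
public static class CacheFactory
{
    public static ICache CreateCache()
    {
        return new WebCache();
    }
}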

Look into using a Dependency Injection tool such as Unity, Ninject, StructureMap, etc. These allow for the application-level control you are looking for by implementing a kernel which holds on to objects in a way very similar to what you seem to be describing.
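For example, with Unity, a container-controlled (singleton) lifetime gives you that application-level hold on an object; a sketch reusing the ICache example from above (registration details illustrative):

using Microsoft.Practices.Unity;

// Register once at application startup (e.g. in Global.asax):
var container = new UnityContainer();

// ContainerControlledLifetimeManager = one instance for the app's lifetime.
container.RegisterType<ICache, WebCache>(new ContainerControlledLifetimeManager());

// Resolving anywhere in the app now returns that same instance:
ICache myCache = container.Resolve<ICache>();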

Related

Trying to improve efficiency of MVC3 + Unity project

I have an MVC3 project that uses Unity for dependency injection.
There is a main MVC3 project, a “domain” class library that sits between MVC3 and the data tier, and a bunch of class libraries that make up the data tier.
(MVC3) – (domain) – (data tier)
This is an example of one of the service constructors in the domain class:
public DomainModelCacheServices(
    Data.Interface.ICountryRepository countryRepository,
    Data.Interface.ILanguageRepository languageRepository,
    Data.Interface.ISocialNetRepository socialNetRepository)
Every time a controller is called that has DomainModelCacheServices in its constructor, a new DomainModelCacheServices object is constructed, plus the three repository classes in the constructor of DomainModelCacheServices.
I cannot believe this is efficient!
What makes this worse is that the class DomainModelCacheServices is a cache class. It loads lists of data that never change, and holds them as statics. But it still needs to construct three repository classes for every reference!
If I give DomainModelCacheServices the lifetime of a singleton (forever), I have to ensure it is thread-safe, and if the day comes when I am getting hundreds of hits, there’s going to be a lot of locking.
I could change the constructor to this:
public DomainModelCacheServices(IServiceLocator serviceLocator)
I don’t know why, but this doesn’t look right. The constructor becomes meaningless to the eye, and I have to reference Unity in the domain class and somehow make the domain class aware of the ServiceLocator owned by the MVC3 application. Maybe the loose-coupling can be too loose?
Maybe constructing all these classes is not as inefficient as it looks and I shouldn't worry about it?
What would be nice is if Unity supported “Lazy” constructor parameters. But it doesn’t.
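For what it's worth, .NET 4's Lazy<T> would at least give thread-safe, load-once data inside a singleton without hand-rolled locking; a sketch with illustrative names:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class CountryCache
{
    private readonly Lazy<List<Country>> _countries;

    public CountryCache(ICountryRepository repository)
    {
        // The loader runs exactly once, even if several requests hit it at the
        // same time; after that, reading .Value never takes a lock.
        _countries = new Lazy<List<Country>>(
            () => repository.GetAll().ToList(),
            LazyThreadSafetyMode.ExecutionAndPublication);
    }

    public IList<Country> Countries
    {
        get { return _countries.Value; }
    }
}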
So, any ideas on how to make an MVC3 + Unity project more efficient, specifically in the domain model design?
Thanks for reading!
The cache shouldn't be defined at the domain level but at the repository implementation level (so in the DAL). So, for example, ICountryRepository should have two implementations in the DAL: CountryRepository and CachedCountryRepository. These should be wired as decorators in Unity (CountryRepository sits inside CachedCountryRepository). CachedCountryRepository would check if the data is in the cache and, if not, pass the call through to the inner CountryRepository.
Creating objects is not expensive, and I wouldn't care too much about these issues as long as the caching is correctly defined.
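A sketch of that decorator arrangement (the repository members and the Unity wiring shown are illustrative):

using System.Collections.Generic;
using System.Linq;

public class CachedCountryRepository : ICountryRepository
{
    private readonly ICountryRepository _inner; // the real CountryRepository
    private List<Country> _countries;           // naive in-memory cache

    public CachedCountryRepository(ICountryRepository inner)
    {
        _inner = inner;
    }

    public IEnumerable<Country> GetAll()
    {
        // Serve from the cache when possible; otherwise pass through to the
        // inner repository and remember the result.
        return _countries ?? (_countries = _inner.GetAll().ToList());
    }
}

// Unity wiring (names illustrative): the real repository is a named
// registration that gets injected into the decorator.
// container.RegisterType<ICountryRepository, CountryRepository>("inner");
// container.RegisterType<ICountryRepository, CachedCountryRepository>(
//     new InjectionConstructor(new ResolvedParameter<ICountryRepository>("inner")));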
Great reasoning.
However, creating objects is cheap. I would not create a singleton since you already are caching all objects in static fields. The current approach is easy to understand.
I got another question for you:
Why are you not caching in your repository classes?
The repositories are responsible for the data and all data handling should be transparent to everything else. It also makes everything easier since they are responsible of updating the data sources. How do you keep the cache in sync with changes today? Through domain events?
I would create a cache class which I would use as a private field in the repository.
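A sketch of that shape, using .NET 4's MemoryCache as the private cache (all member names illustrative):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class CountryRepository : ICountryRepository
{
    // The cache is a private detail of the repository: callers never know
    // whether data came from memory or the database.
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public IEnumerable<Country> GetAll()
    {
        var countries = Cache.Get("countries") as List<Country>;
        if (countries == null)
        {
            countries = LoadFromDatabase(); // hypothetical data access call
            Cache.Set("countries", countries, DateTimeOffset.Now.AddHours(1));
        }
        return countries;
    }

    public void Save(Country country)
    {
        SaveToDatabase(country);   // hypothetical
        Cache.Remove("countries"); // writes keep the cache in sync
    }

    private List<Country> LoadFromDatabase() { return new List<Country>(); }
    private void SaveToDatabase(Country country) { }
}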

Business Layer structure, how do you build yours?

I am a big fan of n-tier for my development choices; of course it doesn't fit every scenario.
I am currently working on a new project, and I am trying to have a play with the way I normally work, trying to see if I can clean it up, as I have been a very bad boy and have been putting too much code in the presentation layer.
My normal business layer structure is this (basic view of it):
Business
    Services
        FooComponent
            FooHelpers
            FooWorkflows
        BahComponent
            BahHelpers
            BahWorkflows
    Utilities
        Common
        ExceptionHandlers
        Importers
        etc...
Now with the above I have great access to directly save a Foo object and a Bah object, via their respective helpers.
The XXXHelpers give me access to Save, Edit and Load the respective objects, but where do I put the logic to save objects with child objects?
For example:
We have the below objects (not very good objects, I know):
Employee
EmployeeDetails
EmployeeMembership
EmployeeProfile
Currently I would build these all up in the presentation layer and then pass them to their Helpers. I feel this is wrong; I think the data should be passed to a single point above presentation, somewhere in the business layer, and sorted out there.
But I'm at a bit of a loss as to where I would put this logic and what to call the section. Would it go under Utilities as EmployeeManager or something like this?
What would you do? (And I know this is all preference.)
A more detailed layout
The workflows contain all the calls directly to the DataRepository for example:
public ObjectName GetById(Guid id)
{
    return DataRepository.ObjectNameProvider.GetById(id);
}
And then the helpers provide access to the workflows:
public ObjectName GetById(Guid id)
{
    return loadWorkflow.GetById(id);
}
This is to cut down on duplicate code: you can have one call in the workflow to get by some property, and then several calls in the Helper which could do other operations and return the data in different ways (a bad example would be public GetByIdAsc and GetByIdDesc).
By separating the calls to the data model through the DataRepository, it would be possible to swap out the model for another instance (that was the thinking), but ProviderHelper has not been broken down, so it is not interchangeable, as it is hard-coded to EF, unfortunately.
I don't intend to change the access technology, but in the future there might be something better, or just something that all the cool kids are now using, that I might want to implement instead.
projectName.Core
projectName.Business
    - Interfaces
        - IDeleteWorkflows.cs
        - ILoadWorkflows.cs
        - ISaveWorkflows.cs
        - IServiceHelper.cs
        - IServiceViewHelper.cs
    - Services
        - ObjectNameComponent
            - Helpers
                - ObjectNameHelper.cs
            - Workflows
                - DeleteObjectNameWorkflow.cs
                - LoadObjectNameWorkflow.cs
                - SaveObjectNameWorkflow.cs
    - Utilities
        - Common
            - SettingsManager.cs
            - JavascriptManager.cs
            - XmlHelper.cs
            - others...
        - ExceptionHandlers
            - ExceptionManager.cs
            - ExceptionManagerFactory.cs
            - ExceptionNotifier.cs
projectName.Data
    - Bases
        - ObjectNameProviderBase.cs
    - Helpers
        - ProviderHelper.cs
    - Interfaces
        - IProviderBase.cs
    - DataRepository.cs
projectName.Data.Model
    - Database.edmx
projectName.Entities (entities that represent the DB tables are created by EF in .Data.Model; this project is for any others I may need that are not related to the database)
    - Helpers
        - EnumHelper.cs
projectName.Presentation (depends on what the application calls for):
    - projectName.web
    - projectName.mvc
    - projectName.admin
The test projects:
    - projectName.Business.Tests
    - projectName.Data.Test
+1 for an interesting question.
So, the problem you describe is pretty common - I'd take a different approach - first with the logical tiers and secondly with the utility and helper namespaces, which I'd try and factor out completely - I'll tell you why in a second.
But first, my preferred approach here is pretty common enterprise architecture, which I'll try to highlight in brief, but there's much more depth out there. It does require some radical changes in thinking: using NHibernate or Entity Framework to allow you to query your object model directly and let the ORM deal with things like mapping to and from the database, lazy loading relationships, etc. Doing this will allow you to implement all of your business logic within a domain model.
First, the tiers (or projects in your solution):
YourApplication.Domain
The domain model - the objects representing your problem space. These are plain old CLR objects with all of your key business logic. This is where your example objects would live, and their relationships would be represented as collections. There is nothing in this layer that deals with persistence etc, it's just objects.
YourApplication.Data
Repository classes - these are classes that deal with getting the aggregate root(s) of your domain model.
For instance, it's unlikely in your sample classes that you would want to look at EmployeeDetails without also looking at Employee (an assumption, I know, but you get the gist; invoice lines are a better example, as you generally get to invoice lines via an invoice rather than loading them independently). As such, the repository classes, of which you have one per aggregate root, are responsible for getting the initial entities out of the database using the ORM in question, implementing any query strategies (like paging or sorting), and returning the aggregate root to the consumer. The repository consumes the current active data context (ISession in NHibernate); how this session is created depends on what type of app you are building.
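A minimal repository sketch along those lines, assuming NHibernate (the Invoice type and the paging strategy are illustrative):

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class InvoiceRepository
{
    private readonly ISession _session; // the current active data context

    public InvoiceRepository(ISession session)
    {
        _session = session;
    }

    // Returns the aggregate root; invoice lines are reached through it.
    public Invoice GetById(int id)
    {
        return _session.Get<Invoice>(id);
    }

    // A simple query strategy: paging.
    public IList<Invoice> GetPage(int page, int pageSize)
    {
        return _session.Query<Invoice>()
            .OrderBy(i => i.Id)
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();
    }
}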
YourApplication.Workflow
Could also be called YourApplication.Services, but this can be confused with web services
This tier is all about interrelated, complex atomic operations - rather than have a bunch of things to be called in your presentation tier, and therefore increase coupling, you can wrap such operations into workflows or services.
It's possible you could do without this in many applications.
Other tiers then depend on your architecture and the application you're implementing.
YourApplication.YourChosenPresentationTier
If you're using web services to distribute your tiers, then you would create DTO contracts that represent just the data you are exposing between the domain and the consumers. You would define assemblers that would know how to move data in and out of these contracts from the domain (you would never send domain objects over the wire!)
In this situation, if you're also creating the client, you would consume the operation and data contracts defined above in your presentation tier, probably binding to the DTOs directly, as each DTO should be view-specific.
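A sketch of the DTO-plus-assembler idea; the Employee properties used here are assumptions for illustration:

public class EmployeeDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; } // only what this view needs
}

public static class EmployeeAssembler
{
    // Domain -> contract; the domain object itself never crosses the wire.
    public static EmployeeDto ToDto(Employee employee)
    {
        return new EmployeeDto
        {
            Id = employee.Id,
            DisplayName = employee.FirstName + " " + employee.LastName
        };
    }
}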
If you have no need to distribute your tiers, remembering the first rule of distributed architectures is don't distribute, then you would consume the workflow/services and repositories directly from within asp.net, mvc, wpf, winforms etc.
That just leaves where the data contexts are established. In a web application, each request is usually pretty self contained, so a request scoped context is best. That means that the context and connection is established at the start of the request and disposed at the end. It's trivial to get your chosen IoC/dependency injection framework to configure per-request components for you.
In a desktop app, WPF or winforms, you would have a context per form. This ensures that edits to domain entities in an edit dialog that update the model but don't make it to the database (eg: Cancel was selected) don't interfere with other contexts or worse end up being accidentally persisted.
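With Castle Windsor, for example, the request-scoped wiring might look like this sketch (assumes NHibernate and a recent Windsor version; the per-web-request lifestyle also requires Windsor's HTTP module to be registered in web.config):

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using NHibernate;

var container = new WindsorContainer();
container.Register(
    Component.For<ISession>()
        .UsingFactoryMethod(k => k.Resolve<ISessionFactory>().OpenSession())
        .LifestylePerWebRequest()); // opened at the start of the request, disposed at its end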
Dependency injection
All of the above would be defined as interfaces first, with concrete implementations realised through an IoC and dependency injection framework (my preference is Castle Windsor). This allows you to isolate, mock, and unit test individual tiers independently, and in a large application, dependency injection is a life saver!
Those namespaces
Finally, the reason I'd lose the helpers namespace is that, in the model above, you don't need them; but also, like utility namespaces, they give lazy developers an excuse not to think about where a piece of code logically sits. MyApp.Helpers.* and MyApp.Utility.* just mean that if I have some code, say an exception handler that logically belongs within MyApp.Data.Repositories.Customers (maybe it's a customer-ref-is-not-unique exception), a lazy developer can just place it in MyApp.Utility.CustomerRefNotUniqueException without really having to think.
If you have common framework-type code that you need to wrap up, add a MyApp.Framework project and relevant namespaces. If you're adding a new model binder, put it in MyApp.Framework.Mvc; if it's common logging functionality, put it in MyApp.Framework.Logging, and so on. In most cases, there shouldn't be any need to introduce a utility or helpers namespace.
Wrap up
So that scratches the surface - hope it's of some help. This is how I'm developing software today, and I've intentionally tried to be brief - if I can elaborate on any specifics, let me know. The final thing to say on this opinionated piece is the above is for reasonably large scale development - if you're writing notepad version 2 or a corporate phone book, the above is probably total overkill!!!
Cheers
Tony
There is a nice diagram and a description on this page about the application layout, although it looks like, further down the article, the application isn't split into physical layers (separate projects) - Entity Framework POCO Repository

What exactly is the model in MVC

I'm slightly confused about what exactly the Model is limited to. I understand that it works with data from a database and such. Can it be used for anything else though? Take for example an authentication system that sends out an activation email to a user when they register. Where would be the most suitable place to put the code for the email? Would a model be appropriate... or is it better put in a view, controller, etc?
Think of it like this. You're designing your application, and you know according to the roadmap that version 1 will have nothing but a text-based command-line interface, version 2 will have a web-based interface, and version 3 will use some kind of GUI API, such as the Windows API, or Cocoa, or some kind of cross-platform toolkit. It doesn't matter.
The program will probably have to go across to different platforms too, so it will have different email subsystems it will need to work with.
The model is the portion of the program that does not change across these different versions. It forms the logical core that does the actual work of whatever special thing that the program does.
You can think of the controller as a message translator. It has interfaces on two sides: one faces towards the model, and one faces towards the view. When you make your different versions, the main activity will be rewriting the view and altering one side of the controller to interface with the view.
You can put other platform/version specific things into the controller as well.
In essence, the job of the controller is to help you decouple the domain logic that's in the model from whatever platform-specific junk you dump into the view or in other modules.
So to figure out whether something goes in the model or not, ask yourself the question "If I had to rewrite this application to work on platform X, would I have to rewrite that part?" If the answer is yes, keep it out of the model. If the answer is no, it may go into the model, if it's part of the essential logic of the program.
This answer might not be orthodox, but it's the only way I've ever found to think of the MVC paradigm that doesn't make my brain melt out of my ear from the meaningless theoretical mumbo jumbo that discussions about MVC are so full of.
Great question. I've asked this same question many times in my early MVC days. It's a difficult question to answer succinctly, but I'll do my best.
The model does generally represent the "data" of your application. This does not limit you to a database however. Your data could be an XML file, a web resource, or many other things. The model is what encapsulates and provides access to this data. In an OOP language, this is typically represented as an object, or a collection of objects.
I'll use the following simple example throughout this answer, I will refer to this type of object as an Entity:
<?php
class Person
{
    protected $_id;
    protected $_firstName;
    protected $_lastName;
    protected $_phoneNumber;
}
In the simplest of applications, say a phone book application, this Entity would represent a Person in the phone book. Your View/Controller (VC) code would use this Entity, and collections of these Entities to represent entries in your phone book. You may be wondering, "OK. So, how do I go about creating/populating these Entities?". A common MVC newbie mistake is to simply start writing data access logic directly in their controller methods to create, read, update, and delete (CRUD) these. This is rarely a good idea. The CRUD responsibilities for these Entities should reside in your Model. I must stress though: the Model is not just a representation of your data. All of the CRUD duties are part of your Model layer.
Data Access Logic
Two of the simpler patterns used to handle the CRUD are Table Data Gateway and Row Data Gateway. One common practice, which is generally "not a good idea", is to simply have your Entity objects extend your TDG or RDG directly. In simple cases, this works fine, but it bloats your Entities with unnecessary code that has nothing to do with the business logic of your application.
Another pattern, Active Record, puts all of this data access logic in the Entity by design. This is very convenient, and can help immensely with rapid development. This pattern is used extensively in Ruby on Rails.
My personal pattern of choice, and the most complex, is the Data Mapper. This provides a strict separation of data access logic and Entities, which makes for lean, business-logic-exclusive Entities. It's common for a Data Mapper implementation to use a TDG, RDG, or even Active Record pattern to provide the data access logic for the mapper object. It's a very good idea to implement an Identity Map to be used by your Data Mapper, to reduce the number of queries you are doing against your storage medium.
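To illustrate, a minimal Data Mapper with an Identity Map might look like this (sketched in C#; all names and the storage call are illustrative):

using System.Collections.Generic;

public class PersonMapper
{
    // Identity Map: one in-memory instance per row, to avoid duplicate queries.
    private readonly Dictionary<int, Person> _identityMap = new Dictionary<int, Person>();

    public Person Find(int id)
    {
        Person person;
        if (_identityMap.TryGetValue(id, out person))
            return person; // already loaded: no query issued

        person = LoadFromStorage(id); // hypothetical TDG/RDG call
        _identityMap[id] = person;
        return person;
    }

    private Person LoadFromStorage(int id)
    {
        // ... query the storage medium and build the Entity ...
        return new Person();
    }
}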
Domain Model
The Domain Model is an object model of your domain that incorporates behavior and data. In our simple phone book application this would be a very boring single Person class. We might need to add more objects to our domain though, such as Employer or Address Entities. These would become part of the Domain Model.
The Domain Model is perfect for pairing with the Data Mapper pattern. Your Controllers would simply use the Mapper(s) to CRUD the Entities needed by the View. This keeps your Controllers, Views, and Entities completely agnostic to the storage medium. This also allows for differing Mappers for the same Entity. For example, you could have a Person_Db_Mapper object and a Person_Xml_Mapper object; the Person_Db_Mapper would use your local DB as a data source to build Entities, and Person_Xml_Mapper could use an XML file that someone uploaded, or that you fetched with a remote SOAP/XML-RPC call.
Service Layer
The Service Layer pattern defines an application's boundary with a layer of services that establishes a set of available operations and coordinates the application's response in each operation. I think of it as an API to my Domain Model.
When using the Service Layer pattern, you're encapsulating the data access pattern (Active Record, TDG, RDG, Data Mapper) and the Domain Model into a convenient single access point. This Service Layer is used directly by your Controllers, and if well-implemented provides a convenient place to hook in other API interfaces such as XML-RPC/SOAP.
The Service Layer is also the appropriate place to put application logic. If you're wondering what the difference between application and business logic is, I will explain.
Business logic is your domain logic, the logic and behaviors required by your Domain Model to appropriately represent the domain. Here are some business logic examples:
Every Person must have an Address
No Person can have a phone number longer than 10 digits
When deleting a Person their Address should be deleted
Application logic is the logic that doesn't fit inside your Domain. It's typically things your application requires that don't make sense to put in the business logic. Some examples:
When a Person is deleted email the system administrator
Only show a maximum of 5 Persons per page
It doesn't make sense to add the logic to send email to our Domain Model. We'd end up coupling our Domain Model to whatever mailing class we're using. Nor would we want to limit our Data Mapper to fetch only 5 records at a time. Having this logic in the Service Layer allows our potentially different APIs to have their own logic. e.g. Web may only fetch 5, but XML-RPC may fetch 100.
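A sketch of application logic living in the Service Layer (again in C#; the mapper and mailer members are illustrative):

using System.Collections.Generic;

public interface IMailer
{
    void Send(string to, string message);
}

public class PersonService
{
    private readonly PersonMapper _mapper; // data access stays behind the mapper
    private readonly IMailer _mailer;      // hypothetical mailing abstraction

    public PersonService(PersonMapper mapper, IMailer mailer)
    {
        _mapper = mapper;
        _mailer = mailer;
    }

    public void DeletePerson(int id)
    {
        _mapper.Delete(id);                                          // domain/data concern
        _mailer.Send("admin@example.com", "A Person was deleted");   // application logic
    }

    public IList<Person> ListPeople(int page)
    {
        return _mapper.FindPage(page, 5); // "5 per page" is application logic, not domain
    }
}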
In closing, a Service Layer is not always needed, and can be overkill for simple cases. Application logic would then typically be put directly in your Controller or, less desirably, in your Domain Model (ew).
Resources
Every serious developer should have these books in his library:
Design Patterns: Elements of Reusable Object-Oriented Software
Patterns of Enterprise Application Architecture
Domain-Driven Design: Tackling Complexity in the Heart of Software
The model is how you represent the data of the application. It is the state of the application, the data which would influence the output (edit: visual presentation) of the application, and variables that can be tweaked by the controller.
To answer your question specifically
The content of the email and the person to send the email to are the model.
The code that sends the email (and verifies the email/registration in the first place) and determines the content of the email is in the controller. The controller could also generate the content of the email - perhaps you have an email template in the model, and the controller could replace placeholders with the correct values from its processing.
The view is basically "An authentication email has been sent to your account" or "Your email address is not valid". So the controller looks at the model and determines the output for the view.
Think of it like this
The model is the domain-specific representation of the data on which the application operates.
The Controller processes and responds to events (typically user actions) and may invoke changes on the model.
So, I would say you want to put the code for the e-mail in the controller.
MVC is typically meant for UI design. I think, in your case, a simple Observer pattern would be ideal. Your model could notify a listener registered with it that a user has been registered. This listener would then send out the email.
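A sketch of that idea, using C# events as the observer mechanism (all names illustrative):

using System;

public class UserModel
{
    // Observers subscribe here; the model knows nothing about email.
    public event Action<string> UserRegistered;

    public void Register(string email)
    {
        // ... persist the new user ...
        var handler = UserRegistered;
        if (handler != null)
            handler(email); // notify listeners
    }
}

// Listener registered with the model; it sends the activation email.
public class ActivationEmailListener
{
    public void Attach(UserModel model)
    {
        model.UserRegistered += SendActivationEmail;
    }

    private void SendActivationEmail(string email)
    {
        Console.WriteLine("Sending activation email to " + email); // stand-in for real mailing
    }
}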
The model is the representation of your data-storage backend. This can be a database, a file-system, webservices, ...
Typically the model performs translation of the relational structures of your database to the object-oriented structure of your application.
In the example above: You would have a controller with a register action. The model holds the information the user enters during the registration process and takes care that the data is correctly saved in the data backend.
The activation email should be sent as a result of a successful save operation by the controller.
Pseudo Code:
public class RegisterModel {
    private String username;
    private String email;
    // ...
}

public class RegisterAction extends ApplicationController {
    public void register(UserData data) {
        // fill the model
        RegisterModel model = new RegisterModel();
        model.setUsername(data.getUsername());
        // fill properties ...

        // save the model - a DAO approach would be better
        boolean result = model.save();
        if (result)
            sendActivationEmail(data);
    }
}
It should be noted that MVC is not a design pattern that fits well for every kind of application. In your case, sending the email is an operation that simply has no perfect place in the MVC pattern. If you are using a framework that forces you to use MVC, put it into the controller, as other people have said.

Which one do you prefer for Searching/Reporting DataTable or DTO or Domain Class?

The project I am currently working on requires a lot of searching/filtering pages. For example, I have a complex search page to get Issues by date, category, unit, ...
The Issue Domain Class is complex and contains lots of value objects and child objects.
I am wondering how people deal with Searching/Filtering/Reporting for the UI. As far as I know I have 3 options, but none of them makes me happy.
1.) Send parameters to the Repository/DAO to get a DataTable and bind the DataTable to UI controls, for example to an ASP.NET GridView:
DataTable dataTable = issueReportRepository.FindBy(specs);
.....
grid.DataSource = dataTable;
grid.DataBind();
In this option I can simply bypass the Domain Layer and query the database for the given specs. And I don't have to get a fully constructed, complex Domain Object. No need for value objects, child objects, ... I get the data to be displayed in the UI in a DataTable directly from the database and show it in the UI.
But if I have to show a calculated field in the UI, like a method return value, I have to do this in the database because I don't have the full domain object. I have to duplicate logic, and there are DataTable problems like no IntelliSense, etc.
2.) Send parameters to the Repository/DAO to get DTOs and bind the DTOs to UI controls:
IList<IssueDTO> issueDTOs = issueReportRepository.FindBy(specs);
....
grid.DataSource = issueDTOs;
grid.DataBind();
This option is the same as the one above, but I have to create anemic DTO objects for every search page. Also, for different Issue search pages I have to show different parts of the Issue objects: IssueSearchDTO, CompanyIssueDTO, MyIssueDTO, ...
3.) Send parameters to Real Repository class to get fully constructed Domain Objects.
IList<Issue> issues = issueRepository.FindBy(specs);
//Bind to grid...
I like Domain-Driven Design and patterns. There is no DTO or duplicated logic in this option, but I have to create lots of child and value objects that will not be shown in the UI. It also requires lots of joins to get the full domain object, and there is a performance cost for needless child objects and value objects.
I don't use any ORM tool. Maybe I could implement Lazy Loading by hand for this version, but it seems a bit overkill.
Which one do you prefer? Or am I doing it wrong? Are there any suggestions or better ways to do this?
I have a few suggestions, but of course the overall answer is "it depends".
First, you should be using an ORM tool or you should have a very good reason not to be doing so.
Second, implementing Lazy Loading by hand is relatively simple so in the event that you're not going to use an ORM tool, you can simply create properties on your objects that say something like:
private Foo _foo;
public Foo Foo
{
    get
    {
        if (_foo == null)
        {
            _foo = _repository.Get(id);
        }
        return _foo;
    }
}
Third, performance is something that should be considered initially but should not drive you away from an elegant design. I would argue that you should use (3) initially and only deviate from it if its performance is insufficient. This results in writing the least amount of code and having the least duplication in your design.
If performance suffers you can address it easily in the UI layer using Caching and/or in your Domain layer using Lazy Loading. If these both fail to provide acceptable performance, then you can fall back to a DTO approach where you only pass back a lightweight collection of value objects needed.
This is a great question and I wanted to provide my answer as well. I think the technically best answer is to go with option #3. It provides the ability to best describe and organize the data along with scalability for future enhancements to reporting/searching requests.
However, while this might be the overall best option, there is a huge cost IMO vs. the other 2 options, which is the additional design time for all the classes and relationships needed to support the reporting needs (again, under the premise that there is no ORM tool being used).
I struggle with this in a lot of my applications as well, and the reality is that #2 is the best compromise between time and design. Now, if you were asking about your business objects and all their needs, there is no question that a fully laid out and properly designed model is important and there is no substitute. However, when it comes to reporting and searching, this to me is a different animal. #2 provides strongly typed data in the anemic classes and is not as primitive as hardcoded values in DataSets like #1, and it still greatly reduces the amount of time needed to complete the design compared to #3.
Ideally I would love to extend my object model to encompass all reporting needs, but sometimes the effort required to do this is so extensive, that creating a separate set of classes just for reporting needs is an easier but still viable option. I actually asked almost this identical question a few years back and was also told that creating another set of classes (essentially DTOs) for reporting needs was not a bad option.
So to wrap it up, #3 is technically the best option, but #2 is probably the most realistic and viable option when considering time and quality together for complex reporting and searching needs.

LINQ To SQL entity objects as domain objects

Clearly separation of concerns is a desirable trait in our code and the first obvious step most people take is to separate data access from presentation. In my situation, LINQ To SQL is being used within data access objects for the data access.
My question is, where should the use of the entity object stop? To clarify, I could pass the entity objects up to the domain layer but I feel as though an entity object is more than just a data object - it's like passing a bit of the DAL up to the next layer too.
Let's say I have a UserDAL class, should it expose an entity User object to the domain when a method GetByID() is called, or should it spit out a plain data object purely for storing the data and nothing more? (seems like wasteful duplication in this case)
What have you guys done in this same situation? Is there an alternative method to this?
Hope that wasn't too vague.
Thanks a lot,
Martin.
I return IQueryable of POCOs from my DAL (which uses LINQ2SQL), so no Linq entity object ever leaves the DAL. These POCOs are returned to the service and UI layers, and are also used to pass data back into the DAL for processing. Linq handles this very well:
IQueryable<MyObjects.Product> products =
    from p in linqDataContext.Products
    select new MyObjects.Product // POCO
    {
        ProductID = p.ProductID
    };
return products;
For most projects, we use LINQ to SQL entities as our business objects.
The LINQ to SQL designer allows you to control the accessibility of the classes and properties that it generates, so you can restrict access to anything that would allow the consumer to violate the business rules and provide suitable public alternatives (that respect the business rules) in partial classes.
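For example (a sketch, assuming the designer-generated UnitPrice setter has been made internal so consumers must go through the rule-checking method):

using System;

public partial class Product
{
    // Public, rule-respecting alternative to the restricted generated setter.
    public void ChangePrice(decimal newPrice)
    {
        if (newPrice < 0)
            throw new ArgumentOutOfRangeException("newPrice", "Price cannot be negative.");
        this.UnitPrice = newPrice; // legal here, inside the entity's own partial class
    }
}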
There's even an article on implementing your business logic this way on the MSDN.
This saves you from writing a lot of tedious boilerplate code and you can even make your entities serialisable if you want to return them from a web service.
Whether or not you create a separate layer for the business logic really depends on the size of your project (with larger projects typically having greater variation between the business logic and data access layers).
I believe LINQ to Entities attempts to provide a one-stop solution to this conundrum by maintaining two separate models (a conceptual schema for your business logic and a storage schema for your data access).
I personally don't like my entities to spread across the layers. My DAL returns POCOs (of course, it often means extra work, but I found this much cleaner - maybe this will be simpler in the next .NET version ;-)).
The question is not so simple and there are lots of different thinking of the subject (I keep on asking myself the same question that you are).
Maybe you could take a look at the MVC Storefront sample app : I like the essence of the concept (the mapping that occurs in the data layer especially).
Hope this helps.
There is a similar post here, however, I see your question is more about what you should do, rather than how you should do it.
In small applications I find a second POCO implementation to be wasteful, in larger applications (particularly those that implement web services) the POCO object (usually a Data Transfer Object) is useful.
If your app falls into the latter case, you may want to look at ADO.NET Data Services.
Hope that helps!
I have actually struggled with this as well. Using plain vanilla LINQ to SQL, I quickly abandoned the DBML tooling, because it bound the entities too tightly to the DAL. I was striving for a higher level of persistence ignorance, although Microsoft didn't make it very easy.
What I ended up doing was hand-writing the persistence ignorance layer, by having the DAL inherit from my POCOs. The inherited objects exposed the same properties as the POCOs they inherit from, so while inside the persistence ignorance layer, I could use attributes to map to the objects. The caller could then cast the inherited object back to its base type, or have the DAL do that for them. I preferred the latter, because it lessened the amount of casting that needed to be done. Granted, this was a primarily read-only implementation, so I would have to revisit it for more complex update scenarios.
The amount of manual coding for this is rather large, because I also have to manually maintain (after coding, to begin with) the context and provider for each data source, on top of the object inheritance and mappings. If this project was being deprecated, I would definitely move to a more robust solution.
Looking forward to the Entity Framework, persistence ignorance is a commonly requested feature according to the design blogs for the EF team. In the meantime, if you decide to go the EF route, you could always look at a pre-rolled persistence ignorance tool, like the EFPocoAdapter project on MSDN, to help.
I use a custom LinqToSQL generator, built upon one I found in the Internet, in place of the default MSLinqToSQLGenerator.
To make my upper layers independent of such Linq objects, I create interfaces to represent each one of them and then use such interfaces in these layers.
Example:
public interface IConcept {
    long Code { get; set; }
    string Name { get; set; }
    bool IsDefault { get; set; }
}

public partial class Concept : IConcept { }

[Table(Name="dbo.Concepts")]
public partial class Concept
{
    private long _Code;
    private string _Name;
    private bool _IsDefault;

    partial void OnCreated();

    public Concept() { OnCreated(); }

    [Column(Storage="_Code", DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true)]
    public long Code
    {
        //***
    }

    [Column(Storage="_Name", DbType="VarChar(50) NOT NULL")]
    public string Name
    {
        //***
    }

    [Column(Storage="_IsDefault", DbType="Bit NOT NULL")]
    public bool IsDefault
    {
        //***
    }
}
Of course there is much more than this, but that's the idea.
Please keep in mind that LINQ to SQL is not a forward-looking technology. It was released, it's fun to play with, but Microsoft is not taking it anywhere. I have a feeling it won't be supported forever either. Take a look at the Entity Framework (EF) by Microsoft, which incorporates some of the LINQ to SQL goodness.
