Clearly, separation of concerns is a desirable trait in our code, and the first obvious step most people take is to separate data access from presentation. In my situation, LINQ to SQL is being used within data access objects for the data access.
My question is, where should the use of the entity object stop? To clarify, I could pass the entity objects up to the domain layer but I feel as though an entity object is more than just a data object - it's like passing a bit of the DAL up to the next layer too.
Let's say I have a UserDAL class, should it expose an entity User object to the domain when a method GetByID() is called, or should it spit out a plain data object purely for storing the data and nothing more? (seems like wasteful duplication in this case)
What have you guys done in this same situation? Is there an alternative method to this?
Hope that wasn't too vague.
Thanks a lot,
Martin.
I return IQueryable of POCOs from my DAL (which uses LINQ2SQL), so no Linq entity object ever leaves the DAL. These POCOs are returned to the service and UI layers, and are also used to pass data back into the DAL for processing. Linq handles this very well:
IQueryable<MyObjects.Product> products = from p in linqDataContext.Products
                                         select new MyObjects.Product // POCO
                                         {
                                             ProductID = p.ProductID
                                         };
return products;
For most projects, we use LINQ to SQL entities as our business objects.
The LINQ to SQL designer allows you to control the accessibility of the classes and properties that it generates, so you can restrict access to anything that would allow the consumer to violate the business rules and provide suitable public alternatives (that respect the business rules) in partial classes.
There's even an article on MSDN about implementing your business logic this way.
This saves you from writing a lot of tedious boilerplate code and you can even make your entities serialisable if you want to return them from a web service.
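For illustration, here is a minimal sketch of that idea; the Order class, its Status property and the internal setter are hypothetical names, not taken from the question:
using System;

// Assumes the designer-generated Order entity has a Status property whose setter was
// marked internal in the designer, so consumers must go through this partial-class method.
public partial class Order
{
    public void Cancel()
    {
        // The business rule lives here rather than in a raw property setter.
        if (Status == "Shipped")
            throw new InvalidOperationException("Shipped orders cannot be cancelled.");

        Status = "Cancelled"; // legal inside the assembly thanks to the internal setter
    }
}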
Whether or not you create a separate layer for the business logic really depends on the size of your project (with larger projects typically having greater variation between the business logic and data access layers).
I believe LINQ to Entities attempts to provide a one-stop solution to this conundrum by maintaining two separate models (a conceptual schema for your business logic and a storage schema for your data access).
I personally don't like my entities to spread across the layers. My DAL returns POCOs (of course, it often means extra work, but I find this much cleaner - maybe this will be simpler in the next .NET version ;-)).
The question is not so simple, and there are lots of different schools of thought on the subject (I keep asking myself the same question that you are).
Maybe you could take a look at the MVC Storefront sample app: I like the essence of the concept (especially the mapping that occurs in the data layer).
Hope this helps.
There is a similar post here, however, I see your question is more about what you should do, rather than how you should do it.
In small applications I find a second POCO implementation to be wasteful; in larger applications (particularly those that implement web services) the POCO object (usually a Data Transfer Object) is useful.
If your app falls into the latter case, you may want to look at ADO.NET Data Services.
Hope that helps!
I have actually struggled with this as well. Using plain vanilla LINQ to SQL, I quickly abandoned the DBML tooling, because it bound the entities too tightly to the DAL. I was striving for a higher level of persistence ignorance, although Microsoft didn't make it very easy.
What I ended up doing was hand-writing the persistence ignorance layer, by having the DAL inherit from my POCOs. The inherited objects exposed the same properties as the POCOs they inherited from, so while inside the persistence ignorance layer, I could use attributes to map to the objects. The caller could then cast the inherited object back to its base type, or have the DAL do that for them. I preferred the latter, because it lessened the amount of casting that needed to be done. Granted, this was a primarily read-only implementation, so I would have to revisit it for more complex update scenarios.
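A rough structural sketch of that arrangement (all names here are hypothetical, and the hand-written mapping layer is only hinted at by the attribute):
using System.Data.Linq.Mapping;

public class Customer                    // POCO, the only type the upper layers see
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
}

[Table(Name = "dbo.Customers")]          // persistence metadata stays on the DAL type
public class CustomerRecord : Customer
{
    // No new state: the record type exists only so mapping attributes can be applied
    // without touching the POCO. The DAL materializes CustomerRecord instances and
    // returns them upcast to Customer, as described above:
    //
    //   Customer LoadCustomer(int id) { CustomerRecord rec = FetchById(id); return rec; }
}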
The amount of manual coding for this is rather large, because I also have to manually maintain (after coding, to begin with) the context and provider for each data source, on top of the object inheritance and mappings. If this project were not being deprecated, I would definitely move to a more robust solution.
Looking forward to the Entity Framework, persistence ignorance is a commonly requested feature according to the design blogs for the EF team. In the meantime, if you decide to go the EF route, you could always look at a pre-rolled persistence ignorance tool, like the EFPocoAdapter project on MSDN, to help.
I use a custom LinqToSQL generator, built upon one I found on the Internet, in place of the default MSLinqToSQLGenerator.
To make my upper layers independent of such Linq objects, I create interfaces to represent each one of them and then use such interfaces in these layers.
Example:
public interface IConcept
{
    long Code { get; set; }
    string Name { get; set; }
    bool IsDefault { get; set; }
}

public partial class Concept : IConcept { }

[Table(Name="dbo.Concepts")]
public partial class Concept
{
    private long _Code;
    private string _Name;
    private bool _IsDefault;

    partial void OnCreated();

    public Concept() { OnCreated(); }

    [Column(Storage="_Code", DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true)]
    public long Code
    {
        //***
    }

    [Column(Storage="_Name", DbType="VarChar(50) NOT NULL")]
    public string Name
    {
        //***
    }

    [Column(Storage="_IsDefault", DbType="Bit NOT NULL")]
    public bool IsDefault
    {
        //***
    }
}
Of course there is much more than this, but that's the idea.
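To show how the upper layers stay ignorant of the LINQ classes, here is a hedged sketch of a repository that hands back only the interface; IConceptRepository, ConceptRepository and MyDataContext are hypothetical names:
using System.Collections.Generic;
using System.Linq;

public interface IConceptRepository
{
    IList<IConcept> GetDefaults();
}

public class ConceptRepository : IConceptRepository
{
    public IList<IConcept> GetDefaults()
    {
        using (var db = new MyDataContext())
        {
            return db.Concepts
                     .Where(c => c.IsDefault)
                     .AsEnumerable()          // run the query, then...
                     .Cast<IConcept>()        // ...expose only the interface
                     .ToList();
        }
    }
}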
Please keep in mind that LINQ to SQL is not a forward-looking technology. It was released, it's fun to play with, but Microsoft is not taking it anywhere. I have a feeling it won't be supported forever either. Take a look at the Entity Framework (EF) by Microsoft, which incorporates some of the LINQ to SQL goodness.
I have the projects
Domain.Model (contains code first POCOs)
Data.Context (contains the context & migrations only)
Data.Access (contains IGenericRepository & GenericRepository)
Service (contains BL service classes and UnitsOfWork)
Presentation.Admin (an Asp.Net Webforms web application)
I am using my POCOs as business objects across all the layers. I know there is some debate about this, but it is also fairly accepted.
So I have the Presentation calling the Service > getting a POCO via the Repository > returning it to the Presentation and displaying it in, for example, an HTML table, and saving edits back to the DB - great.
Now I have a more complex page which I think requires a business object. This is a made-up but analogous example.
POCO
public class Book
{
    public int BookId { get; set; }
    public string ExternalReference { get; set; }
}

public class Movie
{
    public int MovieId { get; set; }
    public string ExternalReference { get; set; }
}
Suggested Business Object
public class MovieAdaptation
{
    public Book Book { get; set; }
    public Movie Movie { get; set; }
}
So ExternalReference is external and cannot be a common foreign key in my database, therefore I cannot just do Book.Movie using a navigation property. I need to do a LINQ join (probably).
So my questions are:
1) Where should I define this business object? Currently it is just in the Service layer, as only things that reference the service layer will use it.
2) Where should I construct this business object? Should it be in repositories which sit in Data.Access or further up?
3) How do I construct it using LINQ? Here is my best shot so far, but it seems pretty inefficient, especially if I am returning a list of these.
namespace MyProject.Services
{
    public class AdaptationsService
    {
        AdaptationUnitOfWork _unitOfWork;

        public AdaptationsService()
        {
            _unitOfWork = new AdaptationUnitOfWork();
        }

        public Adaptation GetAdaptations(string externalReference)
        {
            // Can anyone improve this, maybe using a LINQ join?
            // (Maybe it won't be getting books/movies by SingleOrDefault but by Where.)
            Book book = _unitOfWork.BookRepository.Get.SingleOrDefault(b => b.ExternalReference == externalReference);
            Movie movie = _unitOfWork.MovieRepository.Get.SingleOrDefault(m => m.ExternalReference == externalReference);

            Adaptation adaptation = new Adaptation();
            adaptation.Book = book;
            adaptation.Movie = movie;
            return adaptation;
        }
    }
}
1) Where should I define this business object? Currently it is just in the Service layer, as only things that reference the service layer will use it.
I would probably keep it in the Service layer. It's not relevant to your DAL, as it's a combination of your POCOs, not the POCOs themselves. Have the Service layer construct/destruct it (see below).
2) Where should I construct this business object? Should it be in repositories which sit in Data.Access or further up?
Construct and destruct it in the Service layer. The DAL should only send and receive your data access objects (your POCOs). Constructing and destructing business objects is not part of its job description. For all intents and purposes, whenever you use the term business object, it should be above the DAL.
3) How do I construct it using LINQ? Here is my best shot so far, but it seems pretty inefficient, especially if I am returning a list of these.
I don't have a better answer than the example you gave. It sounds like you have to perform two queries and then construct it yourself, as you're doing.
I have an existing database, which I have been happily accessing using LINQtoSQL. Armed with Sanderson's MVC3 book I thought I'd have a crack at EF4.3, but am really fighting to get even basic functionality working.
Working with SQL 2008, VS2010, the folder architecture appears to be:
ABC.Domain.Abstract
ABC.Domain.Concrete
ABC.Domain.Concrete.ORM
ABC.Domain.Entities
Per examples, repository interfaces are abstract, actual repositories are concrete. Creating EDMX from the existing database puts that in the ORM folder and the Entities holds the classes I designed as part of the domain. So far so good.
However! I have not once persuaded the deceptively simple EfDbContext : DbContext class, with the method below, to work...
public DbSet<ABC.Domain.Entities.Person> Person { get { return _context.Persons; }}
It complains about missing keys, that Person is not an entity class, that it cannot find the conceptual model, and so on.
Considering I have a basic connection string in the web.config, why is it not creating a model on the fly to do simple matching?
Should the ORM folder exist, or should it simply be Concrete? (I have a .SQL subfolder for the LINQtoSQL concrete classes, so it suits me to have .ORM, but if it's a flaw, let's fix it.)
Should I have my homespun entities AND the automatically produced ones or just one set?
The automatic ones inherit from EntityObject; mine are just POCOs, or POCOs with complex types, but do not inherit from anything.
What ties the home designed Domain.Entities.Person type to the Persons property of the Context?
Sanderson's book implies that the matching is implicit if properties are identical, which they are, but that does not do it.
The app.config has an EF flavoured connection string in it, the web.config has a normal connection string in it. Which should I be using - assuming web.config at the moment - so do I delete app.config?
Your help is appreciated. Long time spent, no progress for some days now.
What ties the home designed Domain.Entities.Person type to the Persons property of the Context?
You seem to have a misunderstanding here. Your domain entities are the entities for the database. There aren't two sets. If you actually want to have two sets of object classes (for whatever reason) you must write any mapping between the two manually. EF only knows about the classes which are part of the entity model.
You should also - if you are using EF 4.3 - apply the DbContext Generator T4 template to the EDMX file. Do not work with EntityObject-derived entities! That is not supported with DbContext. The generator will build a set of POCO classes and prepare a derived DbContext. This set of POCO classes are the only entities the DbContext will know about, and they should be your only set of domain entities.
The created DbContext will contain simple DbSet properties with automatic getters and setters...
public DbSet<Person> People { get; set; }
...and the Person class will be created as POCO as well.
Download the entity framework power tools:
http://visualstudiogallery.msdn.microsoft.com/72a60b14-1581-4b9b-89f2-846072eff19d
Right-click in your project to 'reverse engineer an existing database' and it will create the code classes for you. No need to use EDMX, and this method will create the DbContext-derived class for you.
There are many questions here and you won't get an answer to all of them, but I'll stick in my 5 pence for what it's worth.
Sanderson's MVC3 book
Your problems are not to do with MVC3, they are to do with Entity Framework and data persistence layer.
ABC.Domain.Abstract ABC.Domain.Concrete ABC.Domain.Concrete.ORM ABC.Domain.Entities
Can you say why it is separated in such a way? I would argue that ABC.Domain should contain your POCOs, independent of your persistence layer (EF) and your presentation layer (MVC). Your list implies that your domain contains the ORM and your data access entities. I'm not arguing here; what I'm trying to say is that you need to understand what you really need.
At the end of the day, I'm certain that a simple example would suffice with ABC.DataAccess, ABC.Domain and ABC.Site.
Do you understand why repositories are abstract and concrete? If you don't, then leave out interfaces and see whether you can improve it with interfaces later.
Person is not an entity class, that it cannot find the conceptual model, and so on.
Now, there are multiple ways you can get EF to persist data for you. You can use code first, where, as the name implies, you write the code first, and EF generates the database, relations and all the relevant constraints for you.
You can use database first, where EF will generate the relevant classes and data-access-related objects from your database. This is a less preferable method for me, as it relies heavily upon your database structure.
You can use model first, where you design your classes in the EDMX designer and it then generates the relevant SQL for you.
All of these might sound like a bit of a black box, but for what you are trying to achieve all of them will work. EDMX is a good way to learn, and there are many step-by-step tutorials on ASP.NET.
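As a rough sketch of the code-first flavour (the class and connection string names here are made up, not from the question), a context and entity like this are enough for EF to build the model by convention:
using System.Data.Entity;

public class Person
{
    public int PersonId { get; set; }   // "<ClassName>Id" is picked up as the key by convention
    public string Name { get; set; }
}

public class AbcContext : DbContext
{
    // Points at a connection string named "AbcConnection" in web.config.
    public AbcContext() : base("name=AbcConnection") { }

    public DbSet<Person> People { get; set; }
}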
but if it's a flaw, let's fix it).
You will have to fix and refactor it yourself; there is no other way to improve, in my honest opinion. I can give you a different folder/namespace structure, but there will always be a "better" one.
Should I have my homespun entities AND the automatically produced ones, or just one set?
Now this depends on the model that you have chosen: database first, code first, code only and whatever else is there. If you are following domain-driven development, then you will have to work with classes that represent your business logic and that are not tied to your data persistence or presentation layers; therefore POCO is the way forward.
What ties the home designed Domain.Entities.Person type to the Persons property of the Context?
Now this again depends on the model that you are using.
The app.config and web.config
When you are running your web application, the connection string from the web application (web.config) will be used. Please correct me if I'm wrong.
Your help is appreciated. Long time spent, no progress for some days now.
General advice: leave MVC alone for the time being. Get it to work in a console application and make sure you feel comfortable with the options offered in EF. Good luck :)
The solution to why nothing worked code-first...
...turned out to be a reference to System.Data.EntityClient in the connection string, which ought to have read System.Data.SqlClient.
Without this provider entry being correct, it was unable to work code-first.
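For reference, the corrected entry looks roughly like this (the name and database details are placeholders; the important part is the providerName):
<!-- web.config -->
<connectionStrings>
  <add name="AbcConnection"
       connectionString="Data Source=.;Initial Catalog=AbcDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>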
Finding which connection string it was using was a case of deliberately misspelling a keyword in each of the candidate connection strings - they were all named correctly - which lived in app.config and in two places in web.config. With a distinct naming error in each, when the application threw an error trying to create the domain model, it was easy to identify which connection string my derived DbContext class was using. Correcting the providerName made all the difference.
Code-first is now working just fine, with seeded values on model changes.
Slogging through MVC+EF and trying to focus on doing things the right way. Right now I'm looking to add a dropdown to a form, but I'd like to avoid hitting the database every time the page loads, so I'd like to store the data at the app level. I figure creating an application-level variable isn't the best approach. I've read about using the cache and static utility functions but, surprisingly, nothing has sounded terribly definitive (static classes bad for unit testing, caching bad ...).
So I have two scenarios that I'm curious about; I'm not sure if the approach would differ between the two.
1) A basic lookup, let's say the fifty states. Small, defined, will never change. Load at application startup. (Not looking for a hard coded solution but retrieval from the database.)
2) A lookup that will very rarely change, and only via an admin-like screen. Let's say, cities/stores where your product is being sold. So the data would be stored in the model but would be relatively static unless someone made changes via the application. So I'm not looking to hit the database every time I need to populate a dropdown/listbox.
Seems like basic stuff but it's basically the same as this topic that was never answered:
Is it good to use a static EF object context in an MVC application for better perf?
Any help is appreciated.
I will address your question in a few parts. First off, is it inherently bad to use static variables or caching patterns in MVC? The answer is simply no. As long as your architecture supports them, it is OK. Just put your cache in the right place and design for testability, as I will explain later.
The second part is what is the "right" way to have this type of persisted data stored so you don't have to make round trips to the DB to populate common UI items. For this, I don't recommend storing EF objects. I would create POCO objects (View models or similar) that you cache. So in the example of your 50 states you might have something like this:
public class State
{
    public string Abbreviation { get; set; }
    public string Name { get; set; }
}
Then you would do something like this to create your cached list:
List<State> states = Context.StateData.Select(s => new State { Abbreviation = s.Abbreviation, Name = s.Name}).ToList();
Finally, whatever your caching solution is, it should implement an interface so you can mock that caching method for testing.
To do this without running into circular references or using reflection, you will need at least 3 assemblies:
Your MVC application
A class library to define your POCO objects and interfaces
A class library to perform your data access and caching (this can obviously be split into 2 libraries if that makes it easier to maintain and/or test)
That way you could have something like this in your MVC code:
ICache myCache = CacheFactory.CreateCache();
List<State> states = myCache.ListStates();
// populate your view model with states
Where ICache and State are in one library and your actual implementation of ICache is in another.
This is what I do for my standard architecture: splitting POCO objects and interfaces, which are data-access agnostic, into a separate library from the data access, which is itself separate from my MVC app.
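A minimal sketch of that split; ICache and State belong in the POCO/interface library, while InMemoryStateCache (a hypothetical name) lives in the data access/caching library, and MyEntities stands in for whatever your EF context is called:
using System.Collections.Generic;
using System.Linq;

public interface ICache
{
    List<State> ListStates();
}

public class InMemoryStateCache : ICache
{
    private static List<State> _states;   // held for the lifetime of the app domain

    public List<State> ListStates()
    {
        if (_states == null)
        {
            using (var context = new MyEntities())
            {
                _states = context.StateData
                                 .Select(s => new State { Abbreviation = s.Abbreviation, Name = s.Name })
                                 .ToList();   // materialize so no EF objects leak out of the DAL
            }
        }
        return _states;
    }
}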
Look into using a Dependency Injection tool such as Unity, Ninject, StructureMap, etc. These will allow for the application-level control you are looking for by implementing a kernel which holds on to objects in a very similar way to what you seem to be describing.
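A hedged sketch of what that registration might look like with Ninject (using the hypothetical ICache/InMemoryStateCache types sketched in the previous answer; other containers offer equivalent singleton registrations):
using Ninject;

// e.g. in Application_Start / your composition root
var kernel = new StandardKernel();
kernel.Bind<ICache>().To<InMemoryStateCache>().InSingletonScope(); // one cache for the app's lifetime

// resolved wherever it is needed
ICache cache = kernel.Get<ICache>();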
I have a semi complicated question regarding Entity Framework4, Lambda expressions, and Data Transfer Objects (DTO).
So I have a small EF4 project, and following established OO principles, I have a DTO to provide a layer of abstraction between the data consumers (GUI) and the data model.
VideoDTO = DTO with getters/setters, used by the GUI
VideoEntity = Entity generated by EF4
My question revolves around the use of the DTO by the GUI (and not having the GUI use the Entity at all), combined with a need to pass a lambda to the data layer. My data layer is a basic repository pattern with Add, Change, Delete, Get, GetList, etc.
Trying to implement a Find method with a signature like so:
public IEnumerable<VideoDTO> Find(Expression<Func<VideoEntity, bool>> exp)
...
_dataModel.Videos.Where(exp).ToList<Video>()
---
My problem/concern is the "exp" needing to be of type VideoEntity instead of VideoDTO. I want to preserve the separation of concerns so that the GUI does not know about the Entity objects. But if I try to pass in
Func<VideoDTO, bool>
I cannot then do a LINQ Where on that expression using the actual data model.
Is there a way to convert a Func<VideoDTO, bool> to a Func<VideoEntity, bool>?
Ideally my method signature would accept Func<VideoDTO, bool> and that way the GUI would have no reference to the underlying data entity.
Is this clear enough? Thanks for your help
Thanks for the replies to both of you.
I'll try the idea of defining the search criteria in an object and using that in the LINQ expression. Just starting out with both EF4 and L2S, using this as a learning project.
Thanks again!
In architectures like CQRS there isn't a need for such a conversion at all, because the read and write sides of the app are separated.
But in your case, you can't run away from the translation.
First of all - you should be more specific when defining repositories. A repository signature is something you want to keep explicit rather than generic.
A common example to show this idea - can you tell what indexes you need in your database when you look at your repository signature (maybe looking at the repository implementation, but certainly without looking at client code)? You can't, because it's too generic and the client side can search by anything.
In your example it's a bit better, because the expression's genericness is tied to the DTO instead of the entity.
This is what I do (using NHibernate.Linq, but the idea remains)
public class Application
{
    public Project Project { get; set; }
}

public class ApplicationRepository
{
    public IEnumerable<Application> Search(SearchCriteria inp)
    {
        var c = Session.Linq<Application>();
        var q = c.AsQueryable();

        if (!string.IsNullOrEmpty(inp.Acronym))
            q = q.Where(a => a.Project.Acronym.Contains(inp.Acronym));

        /*~20 lines of similar code snipped*/

        return q.AsQueryable();
    }
}

//used by client
public class SearchCriteria
{
    public string Acronym { get; set; }
    /*some more fields that define how we can search Applications*/
}
If you do want to keep your expressions, one way would be to define a dictionary manually, like this:
var d = new Dictionary<Expression<Func<VideoDTO, object>>,
                       Expression<Func<VideoEntity, object>>>
{
    { x => x.DtoPropNumberOne, x => x.EntityPropNumberOne } /*, {2}, {3}, etc.*/
};
And use it later:
//can you spot it?
//client does not know explicitly what expressions the dictionary contains
_dataModel.Videos.Where(d[exp]).ToList<Video>();
//and I'm not 100% sure checking expression equality would actually work
If you don't want to write the mapping dictionary manually, you will need some advanced techniques. One idea would be to translate the DTO expression to a string and then back to an entity expression. Here are some ideas (sorting-related, though) that might help. Expressions are quite complicated beasts.
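Another such technique, purely as a sketch and assuming VideoDTO and VideoEntity expose identically named properties, is an ExpressionVisitor that rewrites the DTO parameter and member accesses onto the entity (the translator class name is made up):
using System;
using System.Linq.Expressions;

public class DtoToEntityTranslator<TDto, TEntity> : ExpressionVisitor
{
    private readonly ParameterExpression _entityParam = Expression.Parameter(typeof(TEntity), "e");

    public Expression<Func<TEntity, bool>> Translate(Expression<Func<TDto, bool>> exp)
    {
        return Expression.Lambda<Func<TEntity, bool>>(Visit(exp.Body), _entityParam);
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        // Swap the DTO lambda parameter for the entity one.
        return node.Type == typeof(TDto) ? (Expression)_entityParam : base.VisitParameter(node);
    }

    protected override Expression VisitMember(MemberExpression node)
    {
        if (node.Expression != null && node.Expression.Type == typeof(TDto))
        {
            // Re-point the member access at the entity property with the same name.
            return Expression.Property(Visit(node.Expression), node.Member.Name);
        }
        return base.VisitMember(node);
    }
}

// usage: var entityExp = new DtoToEntityTranslator<VideoDTO, VideoEntity>().Translate(exp);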
Anyway - as I said, you should avoid this. Otherwise you will produce really fragile code.
Perhaps your design goal is to prevent propagation of the data model entities to the client tier rather than to prevent a dependency between the presentation layer and data model. If viewed that way then there would be nothing wrong with the query being formed the way you state.
To go further you could expose the searchable fields from VideoEntity via an interface (IVideoEntityQueryFields) and use that as the type in the expression.
If you don't want to add an interface to your entities then the more complicated option is to use a VideoEntityQuery object and something that translates an Expression<Func<VideoEntityQuery,bool>> to an Expression<Func<VideoEntity,bool>>.
I keep hearing about EF 4.0, POCO, IObjectSet, UnitOfWork (by the way, UoW is at least 17 years old from when I first heard of it), etc.
So some folks talk about the Repository "pattern", etc. There are numerous bloggers showcasing their concoction of a "wrapper" or repository or something similar.
But they all require IObjectSets (or in some cases IQueryables) to be hanging off their POCOs. The expectation seems to be that you can write queries against them.
So if one needs IObjectSet and not just IList or some other simpler collection, why are we saying this is POCO and free from EF?
If I want to swap EF out from underneath, I need to make sure my "other" O/R Mapper (I know, I know.. EF is not just an O/R Mapper) understands IObjectSet and is able to parse the ExpressionTrees from the queries, execute them and otherwise behave like EF.
IObjectSet is not the interface that makes an entity a POCO; it's just the persistence container. The point of POCO is to prevent you from having to derive your model classes from an EF type, which is what the T4 POCO template in EF4 provides.
The Repository pattern is an optional additional layer of abstraction over your ORM to allow easier implementation of a different one if the need arises. Separation of concerns, etc.
Take a look at Entity Framework Code First: http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx
In response to the phrase: "If I want to swap EF from underneath":
In my business, it is more likely that I would swap out the database, say from Oracle to SQL Server (or vice versa), than that I would swap out the data access framework. On the other hand, there do exist options that make EF a favorable choice.
There are other LINQ providers than those provided by EF (e.g. LLBLGen). Sure, swapping out an EF data tier for NHibernate or EasyObjects would be difficult, because the frameworks do not have sufficient feature parity to ease the transition; however, LINQ was designed to open the way for other LINQ providers to step in and provide their own solution.
Your question contains a wrong statement: in fact, POCOs do not depend on IObjectSet.
POCOs themselves are independent from EF. Or better: they are supposed to be independent from EF. Since YOU are implementing the POCO classes, you are ultimately responsible for making sure of this. (Otherwise the term POCO would be the wrong one.)
If you are using the standard T4 template to create POCO classes from a model description instead of writing the classes on your own, the template ensures that the classes do not depend on EF - they are not derived from EntityObject, and collections as members of a class are generated by this template with ICollection, not with IObjectSet.
The Repository pattern is another question. The POCO T4 template does not create a Repository as an abstract interface to act on a database with POCOs. It creates a derived ObjectContext, which is rather an EF-specific implementation of a possible repository interface (or at least helps to implement a possible repository interface easily).
If you want to have a repository interface which doesn't depend on EF or LINQ you have to define it this way. Nothing forces you to use IObjectSet or IQueryable in that interface. Perhaps the examples of implementing the Repository pattern you saw didn't intend to be independent from Entity Framework or LINQ.
An example:
Suppose, in your business layer, you need a list of all products of a given category returned from the persistence layer. What would this layer expose to fulfill the request?
If you only have databases in mind which offer a LINQ provider you might design the repository interface like so:
public interface IProductsRepository
{
    IQueryable<Product> AllProducts { get; } // Product is the POCO class
}
A concrete implementation of this repository based on EF would simply return an ObjectSet<Product> from the ObjectContext which the T4 template created.
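Roughly like this (a sketch only; MyEntities stands in for whatever name the T4 template gave your derived ObjectContext):
using System.Linq;

public class EfProductsRepository : IProductsRepository
{
    private readonly MyEntities _context = new MyEntities();

    public IQueryable<Product> AllProducts
    {
        get { return _context.Products; } // ObjectSet<Product> already implements IQueryable<Product>
    }
}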
And your business layer runs a query this way:
IProductsRepository rep = new SomeConcreteImplementationOfProductsRepository();
IList<Product> productsOfCategory =
    rep.AllProducts.Where(p => p.Category == "stuff").ToList();
But if you want to be more open about what kinds of persistence storage you would like to support, it might be better to design the repository independently of IQueryable. The consequence could be that your abstract repository interface needs more specific methods to answer requests from the business layer; for instance, you now need:
public interface IProductsRepository
{
    IList<Product> GetProductsOfCategory(string category);
}
and the business layer does this:
IProductsRepository rep = new SomeConcreteImplementationOfProductsRepository();
IList<Product> productsOfCategory = rep.GetProductsOfCategory("stuff");
A concrete implementation of this Repository using EF (or another data framework supporting LINQ) could still leverage a LINQ query like the business layer did in the first example. But other implementations could work in another way (say: you have a "database" which stores products in one text file per category. Then the implementation for that interface method would read one specific file from disk. Or your repository implementation asks a webservice for the data, and so on...)
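For instance, the text-file variant could look something like this sketch (the one-file-per-category layout, the "id;name" line format and the Product properties are all assumptions for illustration):
using System.Collections.Generic;
using System.IO;
using System.Linq;

public class TextFileProductsRepository : IProductsRepository
{
    private readonly string _folder;

    public TextFileProductsRepository(string folder) { _folder = folder; }

    public IList<Product> GetProductsOfCategory(string category)
    {
        var path = Path.Combine(_folder, category + ".txt"); // one file per category
        if (!File.Exists(path)) return new List<Product>();

        return File.ReadAllLines(path)
                   .Select(line => line.Split(';'))
                   .Select(parts => new Product
                   {
                       Id = int.Parse(parts[0]),
                       Name = parts[1],
                       Category = category
                   })
                   .ToList();
    }
}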
Key point is: if you are using POCO classes you are open to all those kinds of repositories. EF with POCO support doesn't force you to build repository interfaces based on IQueryable or even IObjectSet. It finally depends on what kinds of persistence layers you have in mind. The more different they are, the more specific methods you might need to support in your repository interface and the more work you'll have to do to implement those methods. Using IQueryable is a comfortable compromise which allows you to define a simple repository interface while enabling simple implementations by EF, but also by other databases with a LINQ provider. I think that's the only reason why you see examples of repository pattern implementations with IQueryable so often. It's not an inherent restriction imposed by EF with POCOs.
(That's how I think about it, not being an expert in design patterns, so heavy attacks and corrections in the comments are welcome.)