We're working on a large Windows Forms .NET application with a very large database. We currently have around 400 tables and business objects, but that's maybe a quarter of the whole application.
My question is: how do we handle this large number of mapping files in NHibernate, with performance and memory usage in mind?
The business objects and their mapping files are already separated into different assemblies. But I believe that an NH SessionFactory built from all assemblies will use a lot of memory and performance will suffer. If I instead build different factories, each with only a subset of assemblies (maybe something like a domain context, which separates the assemblies into logical parts), I can't exchange objects between them easily and only have access to a subset of objects.
Our current approach is to separate the business objects with the help of a context attribute. A business object can be part of multiple contexts. When a SessionFactory is created, all mapping files of a given context (one or more) are merged into one large mapping file and compiled into a DLL at runtime. The Session itself is then created from this new mapping DLL.
But this approach has some serious drawbacks:
The developer has to take care of the assembly references between the business object assemblies;
The developer has to take care of the contexts or NHibernate will not find the mapping for a class;
The creation of the new mapping file is slow;
The developer can only access business objects within the current context - any other access will result in an exception at runtime.
Maybe there is a completely different approach? I'd be glad to hear any new thoughts on this.
The first thing you need to know is that you do not need to map everything. I have a similar case at work where I mapped the main subset of objects/tables I had to work against; the others I either used via ad-hoc mapping or by issuing plain SQL queries through NHibernate (session.CreateSQLQuery). Of the ones I mapped, a few used Automapper, and for the peskier ones, regular Fluent mapping (heck, I even have NHibernate calls that span different databases, like human resources, finances, etc.).
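To illustrate the second option, here is a minimal sketch of querying an unmapped table with plain SQL through NHibernate; the LegacyPayroll table and column names are made up for the example:

using System.Collections.Generic;
using NHibernate;

public class PayrollLookup
{
    private readonly ISessionFactory sessionFactory;

    public PayrollLookup(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public IList<object[]> FindPayrollRows(int employeeId)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            // The table is not mapped at all; NHibernate just runs the SQL
            // and returns one object[] per row.
            return session.CreateSQLQuery(
                    "SELECT Amount, PayDate FROM LegacyPayroll WHERE EmployeeId = :id")
                .SetInt32("id", employeeId)
                .List<object[]>();
        }
    }
}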
As far as performance goes, I use only one session factory, and I personally haven't seen any drawbacks with this approach. Sure, Application_Start takes longer than in a plain ADO.NET application, but after that it's smooth sailing through and through. It would be even slower to start opening and closing session factories on demand, since they do take a while to spin up.
Since the SessionFactory should be a singleton in your application, its memory cost shouldn't be a major concern.
Now, if SessionFactory is not a singleton, there's your problem.
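For what it's worth, the usual shape of that singleton is something like the sketch below; the configuration details and the mapping assembly name are placeholders:

using System;
using NHibernate;
using NHibernate.Cfg;

public static class NHibernateHelper
{
    // Built once, lazily, for the whole application lifetime.
    private static readonly Lazy<ISessionFactory> factory =
        new Lazy<ISessionFactory>(() =>
        {
            var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
            cfg.AddAssembly("MyApp.BusinessObjects");   // placeholder assembly name
            return cfg.BuildSessionFactory();
        });

    public static ISession OpenSession()
    {
        // Sessions are cheap to open and close; the factory is not.
        return factory.Value.OpenSession();
    }
}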
I guess that traditionally, for a RESTful web service, one would use one type of DTO object for POJO/JSON conversion and a separate DTO object for database entity/POJO conversion?
Spring Boot should be more opinionated and easier to use, but would you still use different DTO object types for JSON and database entity representation, or do you convert entity objects directly to JSON?
Let me share my opinion.
First, I think your question has nothing to do with Spring Boot. Spring Boot just provides a fancy, lightweight way to start the application and allows you to build the app in an easier manner.
But you still have your REST controller there, and from that point on it doesn't differ much from any other type of application.
So what you're actually asking is whether it makes sense to maintain an abstraction of JSON objects, convert them to business logic entity objects, and later convert those again to database objects, or whether it's enough to maintain only two levels and ditch the JSON level.
I think the answer is "it depends".
First of all, the general trend these days is simplification, so maybe it's enough to maintain only one level of objects.
There are a lot of advantages of such an approach:
Obviously less code to maintain
Speed of development and testing (fewer POJOs to write and check, fewer converters to test, and so forth)
Speed of execution - you don't need to waste CPU time on conversion. A fairly obvious implication.
Less obvious: memory consumption. Let's say you work with a big bulk of data returned by your DAO, and say it occupies 10 MB of memory (just for the sake of example). Now if you convert it to business entities, you spend yet another 10 MB, and if there are JSON objects as well, that's another 10 MB. The point is that all these objects may co-exist in memory simultaneously. Of course the GC will probably take care of them if you've implemented everything right, but that's a different story.
However there is one drawback of such a simplification.
In one word, I would call it commitment.
There are three types of APIs in the application:
The API you're committed to at the level of the web service - the JSON structure.
The chances are that various clients (not necessarily running on the JVM at all) run against your web service and consume the data, so they really expect you to provide JSON objects of the given structure.
The API of your business. If your Business logic layer is pretty complicated, you probably have an entire team that develops that logic. So you usually work at the level of APIs between the teams.
The level of DAO - the same story as Business Logic actually.
So now, what happens if you, say, change the API at one level? Does it mean that all the levels will be broken?
Example
Let's say we don't maintain the "JSON" level. In this case, if we change the API at the business logic level, the JSON will also change automatically. The REST framework will happily convert the objects for us, and chances are the user will suddenly get different data.
Another example
Let's say your BL layer provides a Person entity that looks like this:
class Person {
    String firstName;
    String lastName;
    List<Language> languages;
}
class Language {
    ...
}
Now, let's say you have a UI that consumes your REST service, which provides a list of Persons upon request. What if there are two different pages in the UI? One shows only the persons (in that case it doesn't make sense to provide the list of languages spoken by each person).
On the second page, however, you want the full information.
So you'll end up exposing two web services, or complicating the existing one with extra parameters (and the more params like this you have, the less it resembles REST :) )
Maybe separation would help a little here? I don't know.
Bottom line.
I would say that as long as you can live without such a separation, do so. It can work even for quite big projects, and of course it can work for small or middle-sized ones.
If you find yourself struggling around fixes and you feel like such a separation would solve the issues - do the separation.
Hope this helps you understand the implications and choose what works for you.
I'm trying to introduce caching into an existing server application because the database is starting to become overloaded.
Like many server applications we have the concept of a data layer. This data layer has many different methods that return domain model objects. For example, we have an employee data access object with methods like:
findEmployeesForAccount(long accountId)
findEmployeesWorkingInDepartment(long accountId, long departmentId)
findEmployeesBySearch(long accountId, String search)
Each method queries the database and returns a list of Employee domain objects.
Obviously, we want to try and cache as much as possible to limit the number of queries hitting the database, but how would we go about doing that?
I see a couple possible solutions:
1) We create a cache for each method call. E.g. for findEmployeesForAccount we would add an entry with a key account-employees-accountId. For findEmployeesWorkingInDepartment we could add an entry with a key department-employees-accountId-departmentId and so on. The problem I see with this is when we add a new employee into the system, we need to ensure that we add it to every list where appropriate, which seems hard to maintain and bug-prone.
2) We create a more generic query for findEmployeesForAccount (with more joins and/or queries because more information will be required). For other methods, we use findEmployeesForAccount and remove entries from the list that don't fit the specified criteria.
I'm new to caching so I'm wondering what strategies people use to handle situations like this? Any advice and/or resources on this type of stuff would be greatly appreciated.
I've been struggling with the same question myself for a few weeks now... so consider this a half-answer at best. One bit of advice that has been working out well for me is to use the Decorator Pattern to implement the cache layer. For example, here is an article detailing this in C#:
http://stevesmithblog.com/blog/building-a-cachedrepository-via-strategy-pattern/
This allows you to literally "wrap" your existing data access methods without touching them. It also makes it easy to swap the cached version of your DAL for the direct-access version at runtime (which can be useful for unit testing).
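To make the shape of it concrete, here is a rough sketch of such a decorator; the interface, key format, and cache choice are simplified placeholders, not the article's actual code:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class Employee
{
    public long Id { get; set; }
    public string Name { get; set; }
}

public interface IEmployeeRepository
{
    IList<Employee> FindEmployeesForAccount(long accountId);
}

public class CachedEmployeeRepository : IEmployeeRepository
{
    private readonly IEmployeeRepository inner;  // the real data access implementation
    private readonly MemoryCache cache = MemoryCache.Default;

    public CachedEmployeeRepository(IEmployeeRepository inner)
    {
        this.inner = inner;
    }

    public IList<Employee> FindEmployeesForAccount(long accountId)
    {
        string key = "account-employees-" + accountId;
        var cached = cache.Get(key) as IList<Employee>;
        if (cached != null)
            return cached;

        // Cache miss: delegate to the wrapped repository and store the result.
        var result = inner.FindEmployeesForAccount(accountId);
        cache.Set(key, result, DateTimeOffset.Now.AddMinutes(5));
        return result;
    }
}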
I'm still struggling to manage my cache keys, which seem to spiral out of control when there are numerous parameters involved. Inevitably, something ends up not being properly cleared from the cache and I have to resort to heavy-handed ClearAll() approaches that just wipe out everything. If you find a solution for cache key management, I would be interested, but I hope the decorator pattern layer approach is helpful.
So I was searching the web looking for best practices when implementing the repository pattern with multiple data stores when I found my entire way of looking at the problem turned upside down. Here's what I have...
My application is a BI tool pulling data from (as of now) four different databases. Due to internal constraints, I am currently using LINQ-to-SQL for data access but require a design that will allow me to change to Entity Framework or NHibernate or the next data access du jour. I also hold steadfast to decoupled layers in my apps using an IoC framework (Castle Windsor in this case).
As such, I've used the Repository pattern to abstract the actual data access code from my business layer. As a result, my business object is coded against some I<Entity>Repository interface and the IoC Container is used to manage the actual implementation. In this case, I would expect to have a concrete Linq<Entity>Repository that implements the interface using LINQ-to-SQL to do the work. Later I could replace this with an EF<Entity>Repository with no changes required to my business layer.
Also, because I'm coding against the interface, I can easily mock the repository for unit testing purposes.
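As a sketch of what I mean (the member list and Windsor registration are just examples, not a finished design):

using System.Collections.Generic;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
    IEnumerable<Customer> GetAll();
}

// LINQ-to-SQL-backed implementation; the bodies are omitted in this sketch.
public class LinqCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { /* query the DataContext here */ return null; }
    public IEnumerable<Customer> GetAll() { /* ditto */ return new List<Customer>(); }
}

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        // Swapping in an EF-backed repository later only changes this registration.
        container.Register(
            Component.For<ICustomerRepository>().ImplementedBy<LinqCustomerRepository>());
        return container;
    }
}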
So the first question that I have as I begin coding the application is whether I should have one repository per DataContext or per entity (as I've typically done)? Let's say one database contains Customers and Sales with the expected relationship. Should I have a single OrderTrackingRepository with methods that work with both entities or have a separate CustomerRepository and a different SalesRepository?
Next, as a BI tool, the primary interface is for reporting, charting, etc and often will require a "mashup" of data across multiple sources. For instance, the reality is that one database contains customer information while another handles sales information and a third holds other financial information but one of my requirements is to display aggregated information that spans all three. Plus, I have to support dynamic filtering in the UI. Obviously working directly against the LINQ-to-SQL or EF DataContext objects (Table<Entity>, for instance) will allow me to pretty much do anything. What's the best approach to expose that same functionality to my business logic when abstracting the DAL with a repository interface?
This article: link text indicates that EF4 has turned this approach around and that the repository is nothing more than an IQueryable returned from the EF DataContext which brings up a whole other set of questions.
But, I think I've rambled on enough...
UPDATE (Thanks, Steven!)
Okay, let me put a more tangible (for me, at least) example on the table and clarify a few points that will hopefully lead to an approach I can better wrap my head around.
While I understand what Steven has proposed, I have a team of developers I have to consider when implementing such things and I'm afraid they will get lost in the complexity (yes, a real problem here!).
So, let's remove any direct tie-in with Linq-to-Sql because I don't want a solution that is dependent on the way L2S works - or even EF, for that matter. My intent has been to abstract away the data access technology being used so that I can change it as needed without requiring collateral changes to the consuming code in my business layer. I've accomplished this in the past by presenting the business layer with IRepository interfaces to work against. Perhaps these should have been named IUnitOfWork or, more to my liking, IDataService, but the goal is the same. These interfaces typically exposed methods such as Add, Remove, Contains and GetByKey, for example.
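Something along these lines, just to show the shape (a sketch, not my actual interfaces):

using System.Collections.Generic;

// Generic form of the interfaces my business layer codes against.
public interface IDataService<TEntity, TKey>
{
    void Add(TEntity entity);
    void Remove(TEntity entity);
    bool Contains(TEntity entity);
    TEntity GetByKey(TKey key);
    IEnumerable<TEntity> GetAll();
}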
Here's my situation. I have three databases to work with. One is DB2 and contains all of the business information for a customer (franchise), such as their info and their Products, Orders, etc. Another, a SQL Server database, contains their financial history, while a third SQL Server database contains application-specific information. The first two databases are shared by multiple applications.
Through my application, the customer may enter/upload their financial information for a given time period. When entered, I have to perform the following steps:
1. Validate the entered data against a set of static rules. For example, the data must contain a legitimate customer ID value (in the case of an upload). This requires a lookup in the DB2 database to verify that the supplied customer ID exists and is current.
2. Next I have to validate the data against a set of dynamic rules which are contained in the third (SQL Server) database. An example may be that a given value cannot exceed a certain percentage of another value.
3. Once validated, I persist the data to the second SQL Server database containing the financial data.
All the while, my code must have loosely-coupled dependencies so I may mock them in my unit tests.
As part of the analysis, I know that I have three distinct data stores to work with and about a half-dozen or so entities (at this time) that I am working with. In generic terms, I presume that I would have three DataContexts in my application, one per data store, with the entities exposed by the appropriate data context.
I could then create a separate I{repository|unit of work|service} for each entity to be consumed by my business logic, with a concrete implementation that knows which data context to use. But this seems a risky proposition: as the number of entities increases, so does the number of individual repository/UoW/service types.
Then, take the case of my validation logic which works with multiple entities and, thereby, multiple data contexts. I'm not sure this is the most efficient way to do this.
The other requirement that I have yet to mention is on the reporting side where I will need to execute some complex queries on the data stores. As of right now, these queries will be limited to a single data store at a time, but the possibility is there that I might need to have the ability to mash data together from multiple sources.
Finally, I am considering the idea of pulling out all of the data access stuff for the first two (shared) databases into their own project and have been looking at WCF Data Services as a possible approach. This would give me the basis for a consistent approach for any application making use of this data.
How does this change your thinking?
In your case I would recommend returning IEnumerables from your repo's data queries. I usually aggregate calls from multiple repos through a service class that represents the domain problem and encapsulates my business logic. To keep it clean, I try to keep my repos focused on the domain problem. I liken my DataContext to a repo, and extract an interface using a T4 template to make life easier for mocking. But there is nothing stopping you from using a traditional repo that encapsulates your calls. Doing it this way will allow you to switch ORMs at any stage.
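Roughly like this (a sketch; the repository interfaces and the summary type are invented for illustration):

using System.Collections.Generic;
using System.Linq;

public class Customer { public int Id { get; set; } public string Name { get; set; } public string Region { get; set; } }
public class Sale { public int CustomerId { get; set; } public decimal Amount { get; set; } }
public class SalesSummary { public string Customer { get; set; } public decimal Total { get; set; } }

public interface ICustomerRepo { IEnumerable<Customer> ForRegion(string region); }
public interface ISalesRepo { IEnumerable<Sale> ForCustomer(int customerId); }

// The service represents the domain problem and composes the repos;
// callers never see a DataContext or IQueryable.
public class SalesReportingService
{
    private readonly ICustomerRepo customers;
    private readonly ISalesRepo sales;

    public SalesReportingService(ICustomerRepo customers, ISalesRepo sales)
    {
        this.customers = customers;
        this.sales = sales;
    }

    public IEnumerable<SalesSummary> SummariseRegion(string region)
    {
        return customers.ForRegion(region)
            .Select(c => new SalesSummary
            {
                Customer = c.Name,
                Total = sales.ForCustomer(c.Id).Sum(s => s.Amount)
            })
            .ToList();
    }
}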
EDIT: IQueryable IS NOT THE ANSWER! :-)
I have also done a lot of work in this area, and INITIALLY came to the same conclusion, however it is NOT a good solution. The point of the repo is to abstract queries into discrete chunks of work. Exposing IQueryable is too ad hoc and raises some issues later down the line. You lose your ability to scale. You lose your ability to optimize queries (let's say I want to move to a highly optimized stored proc). You lose your ability to use IoC for the repo to switch out data access layers (switch the project from SQL to Mongo). You lose your ability to provide effective data caching in the repo (which is a major strength of the repo pattern). I would recommend taking a CLOSE look at WHY we have a repo pattern. It isn't simply an "ORM" mapping layer. What made this really clear to me was the CQRS pattern.
Further to this, the ad-hoc nature of IQueryable opens you up to ill-fitting reuse of queries. It is GENERALLY not a good idea to reuse queries, since from query to query you see slight deviations, which ends up with two byproducts: queries become too broad and inefficient, and queries become riddled with unmaintainable IF/THEN statements to cater for the deviations.
IQueryable is easy, but opens you up to an unmaintainable mess.
Look at this SO answer. I think it shows a simplified model of what you want. IQueryable<T> is indeed our new Repository :-). DataContext and ObjectContext are our Unit of Work.
UPDATE 2:
Here is a blog post that describes the model you might be looking for.
UPDATE 3
It would be wise to hide the shared databases behind a service. This will solve several problems:
This will make the database private to the service, which makes it much easier to change the implementation when needed.
You can put the needed validation logic (for database 1) in that service and can create tests for that validation logic in that project.
Clients accessing that service can assume correctness of the service, and its validation logic.
The result of this is that your application will send data to the service to validate it, call the service to fetch data, query its own private database (database 3), and join the data from the three data sources together locally. I've never been a fan of using cross-database or even cross-server (in your situation) database calls and letting the database join everything together. Transactions will be promoted to distributed transactions, and it's hard to predict how much data the servers will exchange.
When you abstract the shared databases behind the service, things get easier (at least from your application's point of view). Your application calls services it trusts, which limits the amount of code in that application and the amount of tests. You still want to mock the calls to such a service, but that would be pretty easy. It should also solve the problem of validating over multiple data sources.
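In code terms, the application would then only see something like this (the names are purely illustrative):

using System.Collections.Generic;

public class FinancialUpload
{
    public string CustomerId { get; set; }
    public string Period { get; set; }
    // the actual figures for the period would go here
}

// The shared DB2 and financial databases live behind this contract;
// the application never talks to those databases directly.
public interface ICustomerFinancialsService
{
    bool CustomerExistsAndIsCurrent(string customerId);
    IEnumerable<string> ValidateUpload(FinancialUpload upload);   // returns rule violations
    void SaveFinancialData(FinancialUpload upload);
}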
Validation is always a hard part. I'm very familiar with the Validation Application Block, and love it for its flexibility. It isn't an easy framework, however, but you might take a peek at what you can do with it. For instance, I've written several articles about integration with O/RM tools and how to 'embed' a context (context as in DataContext/Unit of Work) in the Validation Application Block.
Please have a look at my IRepository pattern implementation using EF 4.0.
My solution has the following features:
supports connections to multiple dbs
One repository per entity
Support for execution of queries
Unit of work pattern implementation
Support for validating entities using VAB guidance
Common operations are kept at the base class level. Heavy use of OOP techniques for code reusability and ease of maintenance.
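The general shape is roughly this (a simplified sketch, not the actual implementation):

using System.Data.Objects;
using System.Linq;

// Common operations live in the base class; each concrete repository passes in
// the ObjectContext for whichever database it targets.
public abstract class RepositoryBase<TEntity> where TEntity : class
{
    protected readonly ObjectContext Context;

    protected RepositoryBase(ObjectContext context)
    {
        Context = context;
    }

    public virtual void Add(TEntity entity)
    {
        Context.CreateObjectSet<TEntity>().AddObject(entity);
    }

    public virtual IQueryable<TEntity> Query()
    {
        return Context.CreateObjectSet<TEntity>();
    }

    public virtual void Save()
    {
        Context.SaveChanges();  // the unit of work commits here
    }
}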
We're going to be rebuilding one of our sites in .NET. I've read many articles and really like the idea of separating our project into a data access layer (DAL), business logic layer (BLL), and presentation layer (we're coming from classic ASP, so this is a massive step for us). I also really like LINQ to SQL.
Since LINQ to SQL is aimed at rapid development, is it really possible with LINQ to SQL to have a DAL, BLL, and presentation layer? With LINQ to SQL, would the DAL return the entities or the LINQ code, which could possibly be modified in the BLL? The relationship between the DAL and BLL with LINQ to SQL seems to be a fuzzy topic with no consensus - and since this is such a big jump for us, I definitely want to have a good game plan before diving into anything.
Typed Datasets seem more equipped for this, but if I can get something similar with Linq I'd go that route.
I'd like to stay away from nHibernate and other 3rd party libraries.
We're building exactly what you described, and we're using L2S to do it. Agreed that the relationship between the DAL and BLL is a bit fuzzy, but we have a distinct BLL and a distinct DAL. All our logic is in the BLL and all data retrieval/modification is done via calls to the DAL (using LINQ calls).
Our app uses no typed datasets. We've built entity classes to represent our objects. Now that I've spent a couple months building part of this, I don't see us (me) ever going back to datasets.
Also, I wouldn't get hung up on L2S being "aimed at rapid development". That makes it sound like a prototyping tool; we're finding it to be an industrial-strength tool. This might be contrary to what Microsoft is now saying about it, since they would rather people use EF.
Randy
I recommend taking one step back and looking at your requirements once again.
Do you need real 3 tiers (that is physical deployment to different machines) or just logical partitioning of your application?
I made exactly this mistake on the first big application I wrote. I never needed physical 3-tier (and never will) but designed the application that way. The most striking consequence was that Linq2Sql does not support disconnected change tracking on the entities. I used Linq2Sql Entity Base to work around this limitation, but it violates the concept of Persistence Ignorance very badly (one always knows better afterwards, huh?).
Going real n-tiers has a lot of other implications on application architecture.
You will need message passing, data-transfer objects, etc. Linq2SQL is a decent ORM, and the tight integration with LINQ provides unique possibilities. Other ORMs will still need some time to catch up here; NHibernate 3.0 is a light at the end of the tunnel.
Linq2SQL is a great ORM if you have simple data models and can map in a "class per table" manner.
For disconnected change tracking (which you will need if going n-tiers) other ORMs have better support.
And at last:
(we're coming from classic ASP so this is a massive step for us)
Under these circumstances I'd be especially careful. Switching technologies is often underestimated. Even the smartest programmers on your team will make wrong decisions because they lack experience with the technology. Nonetheless, it is important to go new ways and improve your skill set. Those who never fail will never succeed.
I would say L2S is the DAL. L2S + business logic in separate classes becomes a merged DAL+BLL, the DAL side being the L2S runtime, and the L2S-generated code (datacontext, entity classes etc).
You can still easily separate them so that the L2S-generated part, and any extensions to the entities and datacontext are in a separate DLL, and additional business logic in a separate dll/service/etc. However in many cases there is no real need to separate them.
One reason to separate into DAL+BLL when using L2S would be if you foresee that you will move to another data access technology down the road, or if you may be using more than one data access technology. Having a separate DAL with any L2S-specific stuff separate should make it easier to switch out the DAL. If you want to separate DAL+BLL for that reason, the L2S DAL-DLL should expose the entity classes, any derived classes or projection classes, and methods to get entities or collections (List etc), but keep the DataContext internal to the DAL class to avoid having L2S-specific things (L2S queries etc) trickle into the BLL.
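Roughly, the DAL assembly would contain something like this, where MyDataContext and Customer are assumed to be the L2S designer-generated classes and the connection string is a placeholder:

using System.Collections.Generic;
using System.Linq;

// MyDataContext and Customer are assumed to come from the L2S designer;
// this is just a sketch of keeping L2S specifics inside the DAL assembly.
public class CustomerDal
{
    internal MyDataContext CreateContext()     // internal: L2S details stay in the DAL
    {
        return new MyDataContext("connection string here");
    }

    public List<Customer> GetCustomersByRegion(string region)
    {
        using (var db = CreateContext())
        {
            // The L2S query never leaves this assembly; callers get plain lists.
            return db.Customers.Where(c => c.Region == region).ToList();
        }
    }
}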
JMHO.
Since others mentioned L2S tools, here is a more complete summary of what's out there: http://www.thinqlinq.com/post.aspx/title/linq-tools
IMHO, LINQ to SQL is the best choice available currently. It really makes working with data painless and almost fun. :-) If you are interested in LINQ to SQL, I'd take a look at our PLINQO project. It has some great enhancements to LINQ to SQL to make it a better overall solution.
I think with LINQ the DAL and BLL concepts are no longer meaningful. So I placed the LINQ classes and some getters & setters under the 'Domain' folder under the Code (parent) folder. Then I created 'Repository' classes and 'FrontEnd' classes.
I have been playing with some LINQ ORM (LINQ directly to SQL) and I have to admit I like its expressive power. For small utility-like apps, it also works quite fast: drop a SQL Server table onto a designer surface and you're all set to LINQ away.
For larger apps, however, the DAL was never that big of an issue for me to set up or maintain, and more often than not, once it was set up, most of the programming wasn't happening there anyway...
My, honest - I am an ORM newbie - question : what is the big advantage of ORM over writing a decent DAL by hand?
(seems like a double, couldn't find it though)
UPDATE: OK, it's a double :-) I found it myself eventually:
ORM vs Handcoded Data Access Layer
Strong typing
No need to write the DAL yourself => time savings
No need to write SQL code yourself => less error-prone
I've used Hibernate in the past to dynamically create quite complex queries. The logic involved to create the appropriate SQL would have been very time-consuming to implement, compared with the logic to build the appropriate Criteria. Additionally, Hibernate knew how to work with various different databases, so I didn't need to put any of that logic in our code. We had to test against different databases of course, and I needed to write an extension to handle "like" queries appropriately, but then it ran against SQL Server, Oracle and HSqldb (for testing) with no issues.
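In NHibernate (the .NET port of the same API) that kind of dynamically composed query looks roughly like the sketch below; the Employee properties and filters are invented for the example:

using System.Collections.Generic;
using NHibernate;
using NHibernate.Criterion;

public class Employee
{
    public virtual int Id { get; set; }
    public virtual string LastName { get; set; }
    public virtual long DepartmentId { get; set; }
}

public static class EmployeeSearch
{
    // Each condition is added only if the caller supplied it, so the SQL is
    // composed dynamically without any string concatenation.
    public static IList<Employee> Find(ISession session, string nameFilter, long? departmentId)
    {
        ICriteria criteria = session.CreateCriteria<Employee>();

        if (!string.IsNullOrEmpty(nameFilter))
            criteria.Add(Restrictions.Like("LastName", "%" + nameFilter + "%"));

        if (departmentId.HasValue)
            criteria.Add(Restrictions.Eq("DepartmentId", departmentId.Value));

        return criteria.List<Employee>();
    }
}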
There's also the fact that it's more code you don't have to write, which is always a nice thing :) I can't say I've used LINQ to SQL in anything big, but where I've used it for a "quick and dirty" web-site (very small, rarely updated, little benefit from full layer abstraction) it was lovely.
I used JPA in a project, and at first I was extremely impressed. Gosh it saved me all that time writing SQL! Gradually, however, I became a bit disenchanted.
Difficulty defining tables without surrogate keys. Sometimes we need tables that don't have surrogate keys. Sometimes we want a multicolumn primary key. TopLink had difficulties with that.
Forced data-structure relationships. JPA uses annotations to describe the relationship between a field and the container or referencing class. While this may seem great at first sight, what do you do when you reference the objects differently in the application? Say, for example, you need just specific objects that reference specific records based on some specific criteria (and it needs to be high-performance, with no unnecessary object allocation or record retrieval). The effort to modify entity classes will almost always exceed the effort that would have existed had you never used JPA in the first place (assuming you are at all successful in getting JPA to do what you want).
Caching. JPA defines the notion of caches for your objects. It must be remembered that the database has its own cache, typically optimized around minimizing disk reads. Now you're caching your data twice (ignoring the uncollected GC heap). How this can be an advantage is beyond me.
Data != Objects. For high-performance applications, the retrieval of data from the DB must be done very efficiently. Forcing object creation is not always a good thing. For example, sometimes you may want arrays of primitives. This is about 30 minutes of work for an experienced programmer working with straight JDBC.
Performance, debugging.
It is much more difficult to gauge the performance of an application with complex things going on in the (sub-optimal, autogenerated) caching subsystem, further straining project resources and budgets.
Most developers don't really understand the impedance mismatch problem that has always existed when mapping objects to tables. This fact ensures that JPA and friends will probably enjoy considerable (cough, cough) success for the foreseeable future.
Well, for me it is a lot about not having to reinvent/recreate the wheel each time I need to implement a new domain model. It is simply a lot more efficient to use for instance nHibernate (my ORM of choice) for creating, using and maintaining the data access layer.
You don't specify exactly how you build your DAL, but for me I used to spend quite some time doing the same stuff over and over again. I used to start with the database model and work my way up from there, creating stored procedures etc. Even if I sometimes used little tools to generate parts of the setup, it was a lot of repetitive coding.
Nowadays I start with the domain. I model it in UML, and most of the time I'm able to generate everything from that model, including the database schema. It needs a few tweaks here and there, but with my current setup I get 95% of the data access job done in no time at all. The time I save I can use to fine-tune the parts that need tuning. I seldom need to write any SQL statements.
That's my two cents. :-)
Portability between different db vendors.
My, honest - I am an ORM newbie - question: what is the big advantage of ORM over writing a decent DAL by hand?
Not all programmers are willing or even capable of writing "a decent DAL". Those who can't or get scared from the mere thought of it, find LINQ or any other ORM a blessing.
I personally use LINQ to manipulate collections in the code because of its expressiveness. It offers a very compact and transparent way to perform some common tasks on collections directly in code.
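A trivial example of the kind of thing I mean (in-memory collection work, nothing to do with the database):

using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public string Customer;
    public decimal Amount;
}

class LinqToObjectsDemo
{
    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { Customer = "Alice", Amount = 120m },
            new Order { Customer = "Bob",   Amount = 80m  },
            new Order { Customer = "Alice", Amount = 45m  }
        };

        // Group, aggregate and sort in one readable expression.
        var totals = orders
            .GroupBy(o => o.Customer)
            .Select(g => new { Customer = g.Key, Total = g.Sum(o => o.Amount) })
            .OrderByDescending(x => x.Total);

        foreach (var t in totals)
            Console.WriteLine("{0}: {1}", t.Customer, t.Total);
    }
}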
LINQ will stop being useful to you when you want to create very specific and optimized queries by hand. Then you are likely to get a mixture of LINQ queries intermingled with custom stored procedures wired into them. Because of these considerations, I decided against LINQ to SQL in my current project (since I have a decent (imho) DAL layer). But I'm sure LINQ will do just fine for simple sites like maybe your blog (or SO, for that matter).
With LINQ/ORM there may also be a consideration of lagging for high traffic sites (since each incoming query will have to be compiled all over again). Though I have to admit I do not see any performance issues on SO.
You can also consider waiting for Entity Framework v2. It should be more powerful than LINQ (and hopefully not as bad as v1, according to some people).
Transparent persistence - changes get saved (and cascaded) without you having to call Save(). At first glance this seems like a nightmare, but once you get used to working with it rather than against it, your domain code can be freed of persistence concerns almost completely. I don't know of any ORM other than Hibernate / NHibernate that does this, though there might be some...
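With NHibernate it looks roughly like this (a sketch; it assumes a configured session factory and a mapped Customer entity):

using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public static class RenameCustomer
{
    public static void Run(ISessionFactory factory, int customerId, string newName)
    {
        using (ISession session = factory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var customer = session.Get<Customer>(customerId);
            customer.Name = newName;  // just change the loaded object...

            tx.Commit();              // ...the dirty check issues the UPDATE here,
                                      // no explicit Save/Update call needed
        }
    }
}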
The best way to answer the question is to understand exactly what libraries like Hibernate are actually accomplishing on your behalf. Most of the time abstractions exist for a reason, often to make certain problems less complex, and in Hibernate's case it is almost a DSL for expressing certain persistence concepts in a simple, terse manner.
One can easily change the fetch strategy for collections by changing an annotation rather than writing up lots of code.
Hibernate and LINQ are proven and tested by many; there is little chance you can achieve this quality without lots of work.
Hibernate addresses many features that would take you months and years to code.
Also, while the JPA documentation says that composite keys are supported, it can get very (very) tricky quickly. You can easily spend hours (days?) trying to get something quite simple working. If JPA really made things simpler, then developers would be freed from thinking too much about these details. It doesn't, and we are left with having to understand two levels of abstraction, the ORM (JPA) and JDBC.
For my current project I'm using a very simple implementation that uses a package-protected static get "constructor" that takes a ResultSet and returns an Object. This is about 4 lines of code per class, plus one line of code for each field. It's simple, high-performance, and quite effective, and I retain total control. If I need to access objects differently I can add another method that reads different fields (leaving the others null, for example). I don't require a spec that tells me "how ORMs must (!) be done". If I require caching for that class, I can implement it precisely as required.
I have used LINQ and found it very useful. It saves a lot of time writing data access code. But for large applications you need more than a DAL; for those, you can easily extend the classes it creates. Believe me, it really improves your productivity.