Given a straightforward user-driven, high-traffic web application (no fancy reporting/BI):
If my utmost goal is performance (not ease of maintainability, ease of queryability, etc.), I would surmise that in most cases a roll-your-own DAL would be the best choice.
However, if I were to choose Linq2SQL or NHibernate, roughly what kind of performance hit would we be talking about? 10%? 20%? 200%? Of the two, which would be faster?
Does anyone have any real-world numbers that could shed some light on this? (And yes, I know Stack Overflow runs on Linq2SQL.)
If you know your stuff (esp. in SQL and ADO.NET), then yes - most likely, you'll be able to create a highly tweaked, highly optimized custom DAL for your particular application and be faster overall than a general-purpose ORM like Linq-to-SQL or NHibernate.
As to how much - that's really hard to say without knowing your concrete table structure, data, and usage patterns. I remember Rico Mariani did some Linq-to-SQL vs. raw SQL comparisons, and his end result was that Linq-to-SQL achieved over 90% of the performance of a highly skilled SQL programmer.
See: http://blogs.msdn.com/ricom/archive/2007/07/05/dlinq-linq-to-sql-performance-part-4.aspx
Not too shabby in my book, especially if you factor in the productivity gains you get - but that's always the big trade-off: productivity vs. raw performance.
Here's another blog post on Entity Framework and Linq-to-SQL compared to DataReader and DataTable performance.
I don't have any such numbers for NHibernate, unfortunately.
In two high-traffic web apps, refactoring an ORM call to use a stored procedure from ADO.NET only got us about a 1-2% change in CPU and time.
Going from an ORM to a custom DAL is an exercise in micro-optimization.
I would appreciate it very much if you could help me with my questions:
Is EF5 reliable and efficient enough to deal with very large and complex datasets in the real world?
Comparing EF5 with ADO.NET, does EF5 require significantly more resources, such as memory?
For those who have tried EF5 on a real-world project with a very large and complex dataset, are you happy with the performance so far?
EF creates an abstraction over data access, so using it introduces a number of additional steps to execute any query, but there are workarounds to reduce the cost. Since MS is promoting this ORM, I believe they are doing a lot for performance improvement as well; the EF 6 beta is already out.
There is a good article on the performance of EF5 available on MSDN:
http://msdn.microsoft.com/en-us/data/hh949853
I would not use EF if a lot of complex manipulations and iterations are required on the DbSets after populating them from the EF query.
Hope this helps.
EF is more than capable of handling large amounts of data. Just as in plain ADO.NET, you need to understand how to use it correctly. It's just as easy to write code in ADO.NET that performs poorly. It's also important to remember that EF is built on top of ADO.NET.
DbSets will be much slower with large amounts of data than a code-first EF approach. Plain DataReaders could be faster if done correctly.
The correct answer is 'profile'. Code up some large objects and profile the differences.
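For example, a minimal profiling sketch along these lines (the MyContext/Client types and the connection string are hypothetical stand-ins, not from any of the posts above) makes the comparison concrete:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Linq;

// Inside a test method; MyContext is a hypothetical DbContext exposing a Clients set.
var sw = Stopwatch.StartNew();
using (var ctx = new MyContext())
{
    var viaEf = ctx.Clients.Where(c => c.IsActive).ToList();  // EF5 LINQ query
}
Console.WriteLine("EF:  {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Id, Name FROM Clients WHERE IsActive = 1", conn))
{
    conn.Open();
    var viaAdo = new List<Client>();
    using (var reader = cmd.ExecuteReader())                  // plain DataReader
        while (reader.Read())
            viaAdo.Add(new Client { Id = reader.GetInt32(0), Name = reader.GetString(1) });
}
Console.WriteLine("ADO: {0} ms", sw.ElapsedMilliseconds);

Run it several times and discard the first pass, so EF's one-time model-building cost doesn't skew the numbers.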
I did some research into this on EF 4.1, and some details might still apply, though there have been performance improvements in EF5 to keep in mind.
ADO vs EF performance
My conclusions:
-You won't match ADO performance with a framework that has to generate the actual SQL statement dynamically from C# and turn that SQL statement into a strongly typed object; there is simply too much going on, even with pre-compiled and 'warmed' statements (and performance tests confirm this). This is a happy trade-off for many who find it much easier to write and maintain Linq in C# than stored procs and mapping code.
-You can still use stored procedures with performance equal to ADO, which to me is the perfect situation. You can write LINQ to SQL in situations where it works, and as soon as you would like more performance, write the sproc and import it into EF to enjoy the best performance possible (see the sketch after this list).
-There is no technical reason to avoid EF, except for the requirements of your project and possibly your team's knowledge and comfort level. Be sure it fits into your selected design pattern. EF Anti Patterns to avoid
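The stored-procedure route mentioned above can look roughly like this (a sketch: Database.SqlQuery is the stock DbContext API for raw SQL and sprocs, while GetTopClients and the Client type are made-up names):

// Requires System.Data.SqlClient and System.Linq.
using (var ctx = new MyContext())
{
    // Executes the sproc and maps each result row onto Client,
    // bypassing EF's dynamic SQL generation entirely.
    var top = ctx.Database.SqlQuery<Client>(
        "EXEC dbo.GetTopClients @count",
        new SqlParameter("@count", 50)).ToList();
}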
I hope this helps.
Will usage of LINQ increase day by day, or is it that some organizations do not like to use it?
Linq allows you to simplify your code which is always good, as it makes code less fragile and easier to maintain - as long as your intent (as the developer) is clear.
In my experience, projects are only light on Linq usage if the development team doesn't understand the technology fully, or feels that it doesn't fit into their naive views on proper 'OO' architectures and patterns.
This is highly subjective and depends on context, but I would say absolutely. If you've built medium-sized applications both with and without an ORM, you will quickly understand the immense benefits LINQ affords. It's hard to imagine an organization building subsequent applications without an ORM in conjunction with LINQ.
Historically I've been completely against using ORMs for all but the most basic applications.
My reasoning has always been that it's a very leaky abstraction... mostly because SQL provides a very powerful way to retrieve data from a relational source, which usually gets messed up by the ORM, so that you lose a lot of performance to gain the appearance of not having a relational backend.
I've always thought the DATA should always be kept in the database, not eating up application memory, which won't scale anyway. In addition, the performance hit of being too generic is harmful. For example, if I need the name and address of all the clients in my database, SQL provides me with an easy way to get it, in one query. With an ORM I need to get all the clients and then each name and address; even if it's lazy loaded, it's gonna take a LOT longer.
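For concreteness, the one-query version in plain ADO.NET might look like this (a sketch; the Clients table and connection string are assumed):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Name, Address FROM Clients", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())  // one round trip for every client
        while (reader.Read())
            Console.WriteLine("{0}, {1}", reader.GetString(0), reader.GetString(1));
}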
That's what I think, but has any of the above changed? I'm seeing a lot of ORMs like Entity Framework, NHibernate, etc., and they seem to have a lot of popularity lately... Are they worth it? Do they solve the problems I describe above?
Please read: All Abstractions Are Failed Abstractions. It should put a lot of your questions in perspective.
Performance is usually not an issue with an ORM - and if you really find yourself in a situation where it is, there is almost always the option to handcraft the SQL statements the ORM uses.
IMHO ORMs give you an instant and huge development speed increase. That's why they are so popular. And using them right does not make you paint yourself into a corner. There is always the option of hand-tuning the performance.
Edit:
Even though Jeff focuses on Linq to SQL, everything he says about abstractions and performance is equally true of NHibernate (which I know from years of real-world app development). IMHO one should use an ORM by default, since they are more than fast enough for the notorious 90% of situations. Code written for an ORM is usually more maintainable and readable, especially when it's picked up by the next developer who inherits it. Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live. Never forget about that guy!
In addition, they give you caching, lazy loading, unit of work... you name it, out of the box. And I found that when I was not happy with the performance of the ORM, it was MY fault. ORMs force you to adhere to good OO design practices and help you shape your domain model.
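For instance, the unit-of-work pattern NHibernate gives you looks roughly like this (a sketch assuming a configured ISessionFactory and a mapped Client entity):

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var client = session.Get<Client>(clientId); // first-level cache: repeat Gets hit memory, not the DB
    client.Name = "New name";                   // dirty checking: no explicit Update call needed
    tx.Commit();                                // all tracked changes are flushed in one go
}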
On the Ruby on Rails side, ActiveRecord -- essentially an ORM -- is the basis of 95% of Rails applications (made-up statistic, but it's around there). Actually, to get to that 95% we would probably need to include other ORMs for Rails, like DataMapper.
The abstraction is leaky, and a developer can always dip down to SQL as necessary. Even when you're not using SQL directly, you have to think about the number of database hits, etc. For instance, in ActiveRecord, "eager loading" is used to avoid multiple database hits, so you see stuff like this (it includes the related "author" field of each Post in the initial query... it does a join under the hood, I think):
for post in Post.find(:all, :include => :author)
  puts post.author.name  # no extra query here; authors were eager-loaded above
end
The abstraction leaks, as all abstractions do, but that's not really the point. To decide whether to use the abstraction or not, you have to consider whether it will add to or reduce your general workload. In other words, will you spend more time retrofitting your concepts to make the abstraction work, or is it ready to do what you need without much hacking (saving you time)?
I think that the abstractions that work are those that are mature: ActiveRecord has been around the block a ton (as has Hibernate), so it provides an abstract way to patch most of the leaks you would normally be worried about, without explicitly rolling your own lower-level solution (i.e., without writing SQL).
Beyond the learning curve, I think that ORMs are an amazing time-saver for most of your database access, and that most apps actually do make quite "normal" use of the DB. While it may not be your case whatsoever, eschewing an ORM for direct DB access is often a case of early, and unnecessary, optimization.
Edit: I hadn't seen this, but the Jeff quote is
"Does this abstraction make our code at least a little easier to write? To understand? To troubleshoot? Are we better off with this abstraction than we were without it?"
saying essentially the same thing.
Some of the more modern ORMs are really powerful tools that solve a lot of real-world problems. The good ORMs don't try to hide the relational model from you, but actually leverage it to make OO programming more powerful. They really aren't abstractions in the sense of letting you ignore the "low-level" details of relational algebra; instead, they are toolkits that let you build abstractions on the relational model and make it easier to bring data into the imperative model, track the changes, and push them back to the database. The SQL language really doesn't provide any good way to factor out common predicates into composable, reusable components to achieve business-rule-level abstractions.
Sure there is a performance hit, but it's mostly a constant-factor thing, as you can make the ORM issue whatever SQL you would issue yourself. For your name and address example, in SQLAlchemy you'd just do
for name, address in session.query(Client.name, Client.address):
# process data
and you're done. But where the ORM helps you is when you have reusable relations and predicates. For instance, say you have defined a way to join to a client's favorited items, and a predicate to see whether an item is on sale. Then you can get the list of clients that have some of their favorite items on sale, while also fetching the assigned salesperson, with the following query:
potential_sales = (session.query(Client).join(Client.favorite_items)
.filter(Item.is_on_sale)
.options(eagerload(Client.assigned_salesperson)))
At least for me, the intent of the query is a lot faster to write, clearer, and easier to understand when written like this, instead of as a dozen lines of SQL.
As with any abstraction, you'll have to pay either in the form of performance or leaking. I agree with you in being against ORMs, since SQL is a clean and elegant language. I've sort of written my own little frameworks that do these things for me, but hey, then I sat there with my own ORM (though with a little more control over it than, for example, Hibernate). The people behind Hibernate state that it is fast. It should be able to do about 95% of the boring work against your database (simple queries, updates, etc.) but gives you the freedom to do the last 5% yourself if you want (you can always write your own mappings in special cases).
I think most of the popularity stems from the fact that many programmers are lazy and want established frameworks to do the dirty, boring persistence job for them (and I can understand that), but the price of an abstraction will always be there. I would consider my options thoroughly before choosing to use an ORM in a serious project.
Would I get any performance gains if I replaced the data access part of my application, moving from NHibernate to straight ADO.NET?
I know that NHibernate uses ADO.NET at its core!
Short answer:
It depends on what kind of operations you perform. You probably would get a performance improvement if you write good SQL, but in some cases you might get worse performance since you lose the NHibernate caching etc.
Long answer:
As you mention, NHibernate sits on top of ADO.NET and provides a layer of abstraction. This makes your life easier in many ways, but like all layers of abstraction it has a certain performance cost.
The main case where you would probably see a performance benefit is when you are operating on many objects at once, such as updating many entities or fetching a large number of entities. This is because of the work the NHibernate session does to keep track of which objects have been modified, etc. My experience is that the performance of NHibernate degrades significantly as the number of entities in the session grows.
NHibernate has a lot of ways to improve performance, and if you really know it well, you can get it to perform quite close to ADO.NET. However, if you are not that familiar with it, you can easily shoot yourself in the foot, performance-wise (the select N+1 problem, etc.).
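One common way to keep the session from degrading during bulk work is to flush and clear it periodically; a sketch, assuming a configured ISessionFactory and a list of new entities called items:

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    for (int i = 0; i < items.Count; i++)
    {
        session.Save(items[i]);
        if (i % 100 == 0)        // the batch size is a tuning knob, not a magic number
        {
            session.Flush();     // push the pending inserts to the database
            session.Clear();     // detach tracked entities so the session stays small
        }
    }
    tx.Commit();
}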
There are some situations where you could actually get worse performance when switching from NHibernate to straight ADO.NET. This is because the NHibernate abstraction layer introduces features that can improve performance, such as caching. NHibernate also includes functionality for optimizing the generated SQL for the current database management system. For example, if you are using SQL Server, it might generate slightly different SQL than if you were using Oracle.
It is worth mentioning that it does not have to be an all-or-nothing situation. You could use NHibernate for the 90% of your database access for which it works great, and then use straight SQL for the 10% where you do complex queries, batch inserts/updates, etc.
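NHibernate supports this mixed approach directly: ISession.CreateSQLQuery lets you drop to raw SQL for that 10% without leaving the session (the table and column names below are invented for illustration):

using (var session = sessionFactory.OpenSession())
{
    var rows = session.CreateSQLQuery(
            "SELECT Name, Address FROM Clients WHERE Region = :region")
        .SetString("region", "EMEA")  // named parameter, bound safely
        .List();                      // returns plain rows, no entity tracking
}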
I have searched the internet high and low looking for any performance information on LLBLGen Pro. None found. I just want to know how LLBLGen Pro performs compared to NHibernate. Thanks
Your question is essentially impossible to answer without context. The questions I would ask in response would start with:
What kind of application? Data-centric? Business logic-centric?
How much data are we talking about?
What kind of data operations are we talking about? Mostly reads? Mostly writes?
As a general matter, LLBLGen performs very well. We have used it on 10+ projects (including a few enterprise-scale projects) where I work, and the few issues we've seen with performance were always the result of misunderstanding what the code was doing (there is a learning curve) or a poorly implemented physical model (e.g. missing indexes).
The two frameworks approach the problem of data access very differently. LLBLGen's operations generally translate into SQL that is fairly easy to understand if you have a strong data background. NHibernate uses sessions and cache to keep data in memory when possible to improve performance (disclaimer: I am not an NHibernate expert). LLBLGen does not support this sort of concept; instead it works in a disconnected state and stores change tracking information directly on its entity objects.
Bottom line, the approaches the frameworks take are very different, and it's hard to predict which will perform better without knowing more about what your system does. In the right hands, either framework can be used to design a system where data access performance is not a major performance bottleneck.
Initially we tested LLBLGen at ORMBattle.NET; it was ~2 times faster than NH on materialization, and LINQ query compilation time was pretty good (~4,000 queries/sec.), but CUD (create/update/delete) operations were noticeably slower than in NH (there is no CUD batching in LLBLGen).
Both frameworks must be relatively slow when you deal with a large number of objects in the same session:
NH is relatively slow because of its materialization pipeline. I'm not fully sure why, but e.g. to implement dirty checking, NH must store a clone of every materialized object somewhere. At least two times more RAM ~= at least two times slower.
LLBLGen uses relatively "fat" entities - it seems they store fields in dictionaries. Obviously, this isn't good from a performance point of view, since RAM consumption is one of the essential factors affecting it.
See this FAQ question and Test Suite Summary for a bit deeper explanation.
So in short, LLBLGen Pro should be faster than NH on reads, but slower on writes.