If I write a large SQL statement with many GROUP BY clauses and so on, would it be much faster with plain SQL (maybe a stored procedure), or does LINQ just translate it into a decent SQL statement and give me my results quite fast?
In some cases you may be able to tune the SQL better than LINQ to SQL... but LINQ really is running SQL. It's not fetching all the data into the process and then doing the processing. You can (and should) log what SQL is being generated and profile anything that looks suspicious.
Of course, there's the overhead of converting the query into SQL to start with (which is why you're able to precompile them) and then there's the overhead of converting the data into objects - and keeping track of the IDs etc. In my experience this is usually not a significant overhead though. As ever, profile your code...
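As an illustration of that logging advice: with LINQ to SQL you can see the generated SQL without executing the query, via DataContext.GetCommand. A minimal sketch, assuming a hypothetical NorthwindDataContext with a Customers table (the context and column names are invented, not from the posts above):

using System;
using System.Linq;

class SqlInspectionDemo
{
    static void Main()
    {
        // NorthwindDataContext, City and CompanyName are invented names.
        using (var db = new NorthwindDataContext())
        {
            var query = db.Customers
                          .Where(c => c.City == "London")
                          .OrderBy(c => c.CompanyName);

            // Prints the SQL that LINQ to SQL would send, without running the query.
            Console.WriteLine(db.GetCommand(query).CommandText);
        }
    }
}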
Related
We must create and show at runtime (ASP.NET MVC) some complex reports from Oracle tables with millions of records. The report data must be obtained through groupings and moderately complex calculations.
So is it better for performance and maintainability of the code to do these groupings and calculations via SQL query (PL/SQL) or via LINQ?
Thanks for your kind reply
So is it better for performance and maintainability of the code to do these groupings and calculations via SQL query (PL/SQL) or via LINQ?
It depends on what you mean by "via LINQ". If you mean that you fetch the complete table into local memory and then use LINQ statements to extract the result you want, then of course SQL statements are faster.
However, if you mean that you use Entity Framework, or something similar, then the answer is not as easy to give.
If you use Entity Framework (or some clone), your tables will be represented by IQueryable<...> instead of IEnumerable<...>. An IQueryable has an Expression and a Provider. The Expression represents the query that must be performed. The Provider knows which system must execute the query (usually a Database Management System) and how to communicate with this system. When the query must be executed, it is the task of the Provider to translate the Expression into the language that the system knows (usually something SQL-like) and to execute the SQL-query.
There are two kinds of IQueryable LINQ statements: those that return an IQueryable<...> of something, and those that return a TResult. The ones that return IQueryable only change the Expression. They are functions that use deferred execution.
Functions that do not return an IQueryable are ToList(), FirstOrDefault(), Any(), Max(), etc. Internally they call GetEnumerator() (usually via a foreach), which orders the Provider to translate the Expression and execute the query.
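A small sketch of the difference, with an invented User entity (the names are illustrative only):

using System.Collections.Generic;
using System.Linq;

// Hypothetical entity, for illustration only.
public class User { public int Id { get; set; } public int Age { get; set; } }

public static class DeferredExecutionDemo
{
    public static List<User> LoadAdults(IQueryable<User> users)
    {
        // Deferred: Where only changes the Expression; no SQL is sent yet.
        IQueryable<User> adults = users.Where(u => u.Age >= 18);

        // Immediate: ToList() calls GetEnumerator(), so the Provider translates
        // the Expression into SQL and executes it at this point.
        return adults.ToList();
    }
}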
Back to your question
So which one is more efficient, Entity Framework or SQL? Efficiency is not only the time to perform the queries; it is also the development and testing time, for the first version and for future changes in the software.
If you use Entity Framework (or a clone), the SQL queries created from the Expressions are pretty efficient, depending on the framework manufacturer. If you look at the generated code, the SQL query is sometimes not the optimal one, although you'll have to be a pretty good SQL programmer to improve on most queries.
The big advantage of using Entity Framework and LINQ queries over SQL statements is that development times will be shorter. The syntax of LINQ statements is checked at compile time, SQL statements only at run time. Development and test periods will be shorter.
It is easy to reuse LINQ statements, while SQL statements almost always have to be written specifically for the query you want to execute. LINQ statements can be tested without a database, on any sequence of items that represents your tables.
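For example, the same query logic can be unit tested against a plain in-memory list via AsQueryable(); all names below are invented for illustration:

using System.Collections.Generic;
using System.Linq;

public class Order { public int Id { get; set; } public decimal Total { get; set; } }

public static class OrderQueries
{
    // The same statement works on an IQueryable backed by a database or by a List.
    public static IQueryable<Order> LargeOrders(IQueryable<Order> orders, decimal threshold)
        => orders.Where(o => o.Total >= threshold);
}

public static class OrderQueriesTests
{
    public static bool LargeOrders_FiltersBelowThreshold()
    {
        var fakeTable = new List<Order>
        {
            new Order { Id = 1, Total = 10m },
            new Order { Id = 2, Total = 500m },
        }.AsQueryable();

        var result = OrderQueries.LargeOrders(fakeTable, 100m).ToList();
        return result.Count == 1 && result[0].Id == 2;
    }
}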
My Advice
For most queries you won't notice any difference in execution time between the entity framework query or the SQL query.
If you expect complicated queries and future changes, I'd go for Entity Framework, the main arguments being the shorter development time, the better testing possibilities, and the better maintainability.
If you detect some queries where you notice that the execution time is too long, you can always decide to bypass entity framework by executing a SQL query instead of using LINQ.
If you've wrapped your DbContext in a proper repository, where you hide the use cases from their implementations, the users of your repository won't notice the difference.
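A rough sketch of such a repository, reusing the invented User entity from the sketch above; MyDataContext is a hypothetical context type, and the commented-out line shows one possible raw-SQL escape hatch (LINQ to SQL's ExecuteQuery):

using System.Collections.Generic;
using System.Linq;

public interface IUserRepository
{
    IReadOnlyList<User> FindAdults();
}

public class UserRepository : IUserRepository
{
    private readonly MyDataContext _db; // hypothetical DataContext subclass

    public UserRepository(MyDataContext db) { _db = db; }

    public IReadOnlyList<User> FindAdults()
    {
        // Default: let the provider generate the SQL from the LINQ query.
        return _db.Users.Where(u => u.Age >= 18).ToList();

        // If profiling shows this query is too slow, swap in hand-written SQL
        // without changing any callers, e.g.:
        // return _db.ExecuteQuery<User>("SELECT * FROM Users WHERE Age >= {0}", 18).ToList();
    }
}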
I have a database (SQL Server 2005), and there are about 100,000 records in a table called users. When I query it using LINQ to SQL, it gets slower and slower. What can I do to improve the speed?
Analysing your query and adding some indexes to your table may help.
To get a more specific answer, post more specific information (table structure, indexes you have, the SQL code L2S generates, ...).
You could (in order of preference):
Save your query as a stored procedure
Add indexes to your users table, for the columns you are querying and sorting on
Analyze your query (if it is complicated) and see if there's a less resource-intensive way of doing it. There are graphical query analyzers to help you.
As a last resort, don't use LINQ, but ADO.NET Entity Framework instead; it's significantly faster. But you'll only see performance improvements for crazy stuff, and only if you've already done all of the above.
Use stored procedures, and then use LINQ to SQL to get the desired rows; this will give good performance.
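For example, you can map the rows a stored procedure returns onto objects with DataContext.ExecuteQuery; the procedure name and columns below are hypothetical:

using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

public class UserRow { public int Id { get; set; } public string Name { get; set; } }

public static class UserQueries
{
    public static List<UserRow> GetActiveUsers(DataContext db)
    {
        // ExecuteQuery maps each result row onto a UserRow by column name;
        // {0} becomes a SQL parameter.
        return db.ExecuteQuery<UserRow>("EXEC dbo.GetActiveUsers {0}", true).ToList();
    }
}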
The best tools at your disposal for analyzing your database access and seeing what needs to be optimized are:
SQL Server Profiler
Graphical Execution Plans
The first one will allow you to see the exact queries being sent to your database from your application, which is especially useful if it turns out that your application is chattier than you think. The second one will allow you to take those queries and see exactly what the SQL server is doing with them.
In the graphical execution plan, look for steps which use a lot of CPU and paths which transfer a lot of records. Those are what you'll want to optimize. It's possible that you're doing a table scan somewhere, which is slow, or maybe joining on many more records than you need somewhere, which is slow, etc.
While developing applications, I usually go for stored procedures to contain CRUD logic, so as to improve performance and maintainability. But after experimenting with LINQ to SQL, I was wondering whether using compiled LINQ to SQL queries rather than stored procedures would help improve performance?
LINQ to SQL will not improve your performance, because you will be sending each CRUD operation as a string over the wire.
Performance will still be better with Stored Procedures, but ORM's like Linq to SQL usually make development time faster.
From my experience, I can rank performance as follows:
1. Stored procedures
2. Native queries (using DbCommand)
3. LINQ to Entities (compiled query, EF4)
4. LINQ to SQL (compiled)
5. LINQ to Entities (not compiled, EF4)
6. LINQ to SQL
7. ESQL
2, 3 and 4 may change their order depending on the nature of the queries, but in general a raw SQL query is executed faster.
Based on your comments to both DevSlick and a1ex07, it seems you have a fundamental misunderstanding of what LINQ is. In order for LINQ queries to allow chaining, like
var activePeople = peopleList.Where(o => o.Active).OrderBy(o => o.Ordering).Select(o => o.Name);
the execution of the LINQ query must be delayed until it is enumerated:
foreach(var person in activePeople)
{
//If this is LINQ-to-SQL, the query to peopleList has waited until now to request anything from the database
}
This means that the query .Where(o => o.Active).OrderBy(o => o.Ordering).Select(o => o.Name) is not actually interpreted by your computer until that point as well. If you run the same query 100 times, that means the computer has to reinterpret that query 100 times. For LINQ-to-SQL, that means translating the query to SQL 100 times before that SQL is sent to the database each time, even if the SQL is exactly the same every time.
Compiling the query ahead of time causes it to generate the SQL only once, and use that SQL every time the query is called. This has nothing to do with stored procedures - you would compile a query-to-a-stored-procedure in the same way that you would compile any other query. Asking "which gives better performance" is meaningless, as they are not mutually exclusive.
Though compiling a query sounds like a good thing, in practice interpreting a LINQ query (usually called "evaluating the expression tree") takes very little time compared to actually executing the SQL against the database, so you get very little benefit from compiling the query. Meanwhile, the syntax for compiling a query is atrocious:
static readonly Func<AdventureWorksEntities, Decimal, IQueryable<SalesOrderHeader>> s_compiledQuery2 =
    CompiledQuery.Compile<AdventureWorksEntities, Decimal, IQueryable<SalesOrderHeader>>(
        (ctx, total) => from order in ctx.SalesOrderHeaders
                        where order.TotalDue >= total
                        select order);

var orders = s_compiledQuery2.Invoke(context, totalDue);
For this reason, it is usually recommended to simply not compile your LINQ-to-SQL queries, because the ratio of code-noise-to-benefit is terrible.
I've just started using LINQ to SQL on a mid-sized project, and would like to increase my understanding of what advantages L2S offers.
One disadvantage I see is that it adds another layer of code, and my understanding is that it has slower performance than using stored procedures and ADO.Net. It also seems that debugging could be a challenge, especially for more complex queries, and that these might end up being moved to a stored proc anyway.
I've always wanted a way to write queries in a better development environment, are L2S queries the solution I've been looking for? Or have we just created another layer on top of SQL, and now have twice as much to worry about?
Advantages L2S offers:
No magic strings, like you have in SQL queries
Intellisense
Compile check when database changes
Faster development
Unit of work pattern (context)
Auto-generated domain objects that are usable in small projects
Lazy loading.
Learning to write LINQ queries/lambdas is a must for .NET developers.
Regarding performance:
Most likely, performance is not going to be a problem in most solutions. Premature optimization is an anti-pattern. If you later see that some areas of the application are too slow, you can analyze those parts, and in some cases even swap some LINQ queries with stored procedures or ADO.NET.
In many cases the lazy loading feature can speed up performance, or at least simplify the code a lot.
Regarding debugging:
In my opinion, debugging Linq2Sql is much easier than both stored procedures and ADO.NET. I recommend that you take a look at the Linq2Sql Debug Visualizer, which enables you to see the query, and even trigger an execute to see the result when debugging.
You can also configure the context to write all SQL queries to the console window; more information here.
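A minimal sketch of that logging, assuming a LINQ to SQL context subclass (NorthwindDataContext is an invented name):

using System;
using System.Linq;

class LoggingDemo
{
    static void Main()
    {
        using (var db = new NorthwindDataContext())
        {
            // Every SQL command the context sends is now echoed to the console.
            db.Log = Console.Out;

            var londoners = db.Customers.Where(c => c.City == "London").ToList();
        }
    }
}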
Regarding another layer:
Linq2Sql can be seen as another layer, but it is a purely data access layer. Stored procedures are also another layer of code, and I have seen many cases where part of the business logic has been implemented in stored procedures. This is much worse in my opinion, because you are then splitting the business layer into two places, and it will be harder for developers to get a clear view of the business domain.
Just a few quick thoughts.
LINQ in general
Query in-memory collections and out-of-process data stores with the same syntax and operators
A declarative style works very well for queries - it's easier to both read and write in very many cases
Neat language integration allows new providers (both in and out of process) to be written and take advantage of the same query expression syntax
LINQ to SQL (or other database LINQ)
Writing queries where you need them rather than as stored procs makes development a lot faster: there are far fewer steps involved just to get the data you want
Far fewer strings (stored procs, parameter names or just plain SQL) involved where typos can be irritating; the other side of this coin is that you get Intellisense for your query
Unless you're going to work with the "raw" data from ADO.NET, you're going to have an object model somewhere anyway. Why not let LINQ to SQL handle it for you? I rather like being able to just do a query and get back the objects, ready to use.
I'd expect the performance to be fine - and where it isn't, you can tune it yourself or fall back to straight SQL. Using an ORM certainly doesn't remove the need for creating the right indexes etc, and you should usually check the SQL being generated for non-trivial queries.
It's not a panacea by any means, but I vastly prefer it to either making SQL queries directly or using stored procs.
I must say they are what you have been looking for. It takes some time getting used to, but once you do, you can't imagine going back (at least I can't).
Regarding LINQ vs. stored procedures, you can get poor performance from either if you build it wrong. I moved some awfully coded stored procedures of a client to LINQ to SQL, and the time dropped from 20 seconds (totally unacceptable for a web app) to under 1 second. And much, much less code than the stored procedure solution.
Update 1: You also get a lot of flexibility, as you can limit the columns you select, and it will actually retrieve only those. With the stored procedure solution you have to define a procedure for each column set you are getting, even if the underlying queries are the same.
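A sketch of that column-limiting, with invented names; projecting with Select means only the listed columns appear in the generated SELECT:

using System.Linq;

public static class ProjectionDemo
{
    public static void Run(NorthwindDataContext db) // invented context type
    {
        // Only Id and Name are retrieved from the database; the generated
        // SELECT contains just those two columns instead of the whole row.
        var namesOnly = db.Customers
                          .Where(c => c.Active)
                          .Select(c => new { c.Id, c.Name })
                          .ToList();
    }
}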
Just as an update, here are some links on the future of LINQ to SQL:
What is the Future of Linq to SQL
Has Microsoft confirmed their stance on LINQ to SQL end-of-life?
Is LINQ to SQL Dead or Alive?
As a comment in the last link states, LINQ to SQL isn't going to go away, just not "improved upon" at least by Microsoft. Take these comments and posts as you will, just be cautious in your development plans.
We recently switched over to LINQ to Entities on the Entity Framework. Before that, we had basic SQL adapters. Since the database we are working with is rather small, I cannot comment on the performance of LINQ.
I must admit, though, that writing queries has become a lot easier, and the addition of entities enables strong typing.
I am creating a data source for reporting model (SQL Server Reporting Services).
The reports require a lot of joins and calculations (say, calculating financial parameters like money spent on this or that, amount A vs. amount B)... all of this involves subobjects.
It makes a lot of sense to me to write unit tests for this code (i.e. walking through the order collection, aggregating info based on business rules and subobjects, etc.).
To do this properly, I would expect my code to look approximately like this:
foreach (IOrder order in Orders)
{
    foreach (IOrderLine line in order.OrderLines)
    {
        // aggregate info based on business rules
    }
}
return ...
and then test the return value.
But this code is not the SQL which is going to be used in the reporting view...of course...
So I am thinking I could plug a .NET assembly into the database.
The issue here is, of course, performance...I don't want to loop all these objects in C#...too slow.
So, naturally, Linq/Lambda/Expression trees seem to be the answer to me.
As we know, when you are doing Linq to SQL, expression trees are built, and then proper SQL is generated based on them.
So, I could write my code in Linq to Objects, using lambda expressions, unit test this code on sample collections (having expressions compiled to .net), and reuse the same code as Linq to SQL in the DB stored procedure, so that inside SQL Server it would generate proper SQL for me (as Linq to SQL already does)...
Then I could get benefits of both unit-tests and writing domain logic code in C# and high-performing stored procedures for reports.
Is this possible? Can I use LINQ/lambdas in SQL Server CLR stored procedures? Has anyone done it, or does anyone know how to make it work?
Am I crazy? Do you know a better way of doing it?
Thanks
P.S. I think now I've figured out how this should be done properly, according to Udi Dahan, if I understand him right: the database should be denormalized, and all the calculated fields should be on the objects in the table.
When something is happening on the subobject (OrderLine added), my Customer object should receive an event and recalculate the smart value (cache it and persist).
Then the reports become straightforward, contain no logic, and run fast...
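A rough sketch of that event-driven recalculation idea (all names invented):

// When an order line is added, the parent recalculates and caches the
// denormalized value, which is then persisted with the row.
public class OrderLine { public decimal Amount { get; set; } }

public class Customer
{
    public decimal TotalSpent { get; private set; } // denormalized, stored in the Customer table

    public void OnOrderLineAdded(OrderLine line)
    {
        TotalSpent += line.Amount; // incremental update instead of recomputing at report time
    }
}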
No, you cannot use LINQ/lambdas in SQL CLR procs - it is based on a different version of .NET and does not support those namespaces.
So, I could write my code in Linq to Objects, using lambda expressions, unit test this code on sample collections (having expressions compiled to .net), and reuse the same code as Linq to SQL in the DB stored procedure, so that inside SQL Server it would generate proper SQL for me (as Linq to SQL already does)...
This plan was fine until you suggested the CLR code be called from your stored procs. Running CLR code from the database process itself creates a lot of problems with regards to versioning, configuration and database stability... Too many problems if you do that.
Your motivation was to have the benefit of using stored procs, which are faster in general. If those stored procs are in turn running CLR code, they're not going to be faster than the CLR code running in the local process.
Using the LINQ-generated expressions technically consumes more CPU cycles than stored procs. This is because the database engine has to regenerate the execution plan each time a query is run. Typically, though, your database server is on a separate machine that is not CPU-bound (it will be limited instead by disk or network capacity), so this is not a real performance issue. It could be if you run the database server on the same machine as everything else, but don't try to fix this with something so convoluted until it's a real issue.
Udi's suggestion may be appropriate if you want to decrease the overhead of generating the reports. There are two important side effects to consider first, though. First, can you afford to increase the performance overhead of the operations that pregenerate the reported fields? A bigger problem is that it couples your reporting logic with the code that runs the target system. This prevents you from updating the reporting code without also updating the business code, and presumes the reporting code is running as soon as the reported code is put into production.