What are the best practices for determining when (or not) to pre-compile a Linq Query? [duplicate]

I have a table:
-- Tag
ID | Name
-----------
1 | c#
2 | linq
3 | entity-framework
I have a class that will have the following methods:
IEnumerable<Tag> GetAll();
IEnumerable<Tag> GetByName(string name);
Should I use a compiled query in this case?
static readonly Func<Entities, IEnumerable<Tag>> AllTags =
    CompiledQuery.Compile<Entities, IEnumerable<Tag>>
    (
        e => e.Tags
    );
Then my GetByName method would be:
IEnumerable<Tag> GetByName(string name)
{
    using (var db = new Entities())
    {
        return AllTags(db).Where(t => t.Name.Contains(name)).ToList();
    }
}
This generates a SELECT ID, Name FROM Tag and executes the Where in code. Or should I avoid CompiledQuery in this case?
Basically I want to know when I should use compiled queries. Also, in a web application, are they compiled only once for the entire application?

You should use a CompiledQuery when all of the following are true:
The query will be executed more than once, varying only by parameter values.
The query is complex enough that the cost of expression evaluation and view generation is "significant" (a matter of trial and error).
You are not using a LINQ feature like IEnumerable<T>.Contains() which won't work with CompiledQuery.
You have already simplified the query where possible, which gives a bigger performance benefit than compiling.
You do not intend to further compose the query results (e.g., restrict or project), which has the effect of "decompiling" it.
CompiledQuery does its work the first time a query is executed. It gives no benefit for the first execution. Like any performance tuning, generally avoid it until you're sure you're fixing an actual performance hotspot.
2012 Update: EF 5 will do this automatically (see "Entity Framework 5: Controlling automatic query compilation"). So add "You're not using EF 5" to the above list.
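For the question's example, a parameterized version might look like this (a sketch, assuming Entities is an ObjectContext and that string Contains translates to a SQL LIKE, as it does in EF):

static readonly Func<Entities, string, IQueryable<Tag>> TagsByName =
    CompiledQuery.Compile<Entities, string, IQueryable<Tag>>(
        // name is a runtime parameter; the Where is part of the compiled
        // expression, so the filter runs in the database
        (e, name) => e.Tags.Where(t => t.Name.Contains(name)));

IEnumerable<Tag> GetByName(string name)
{
    using (var db = new Entities())
    {
        // Invoke and materialize immediately; composing further onto the
        // result would "decompile" the query, as noted above
        return TagsByName(db, name).ToList();
    }
}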

Compiled queries save you the time that would otherwise be spent generating expression trees. If the query is used often and you keep the compiled query around, you should definitely use it. I have seen many cases where parsing the query took more time than the actual round trip to the database.
In your case, if you are sure that it would generate SELECT ID, Name FROM Tag without the WHERE clause (which I doubt, as your AllTags function should return an IQueryable and the actual query should only be built after calling ToList), you shouldn't use it.
As someone already mentioned, on bigger tables SELECT * FROM [someBigTable] would take a very long time, and you'd spend even more time filtering that on the client side. So you should make sure that your filtering is done on the database side, whether you are using compiled queries or not.
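For example, to keep the filter on the database side in the question's GetByName, compose the Where before materializing (a sketch):

IEnumerable<Tag> GetByName(string name)
{
    using (var db = new Entities())
    {
        // Because db.Tags is IQueryable, the Where is translated into a
        // SQL WHERE ... LIKE clause instead of filtering rows in memory
        return db.Tags.Where(t => t.Name.Contains(name)).ToList();
    }
}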

Compiled queries are most helpful for LINQ queries with large expression trees, i.e. complex queries, where you gain performance by not building the expression tree again and again while reusing the query. In your case, I guess it will save very little time.

Compiled queries are prepared the first time they are executed, not when the application is compiled. Whenever you reuse a query often, or it is complex, you should definitely try compiled queries to make execution faster.
But I would not go for it on all queries, as it is a little more code to write, and for simple queries it might not be worthwhile.
But for maximum performance you should also evaluate stored procedures, where you do all the processing on the database server. Even though LINQ tries to push as much of the work to the database as possible, you will have situations where a stored procedure is faster.

Compiled queries offer a performance improvement, but it's not huge. If you have complex queries, I'd rather go with a stored procedure or a view, if possible; letting the database do its thing might be a better approach.

Related

Is better Linq or SQL query for complex calculations and aggregations?

We must create and show at runtime (ASP.NET MVC) some complex reports from Oracle tables containing millions of records. The report data must be obtained from groupings and somewhat complex calculations.
So is it better for performance and maintainability of code to do these groupings and calculations via SQL query (PL/SQL) or via LINQ?
Thanks for your kind reply
So is it better for performance and maintainability of code to do these groupings and calculations via SQL query (PL/SQL) or via LINQ?
It depends on what you mean by via linq. If you mean that you fetch the complete table to local memory and then use linq statements to extract the result that you want, then of course SQL statements are faster.
However, if you mean that you use Entity Framework, or something similar, then the answer is not as easy to give.
If you use Entity Framework (or some clone), your tables will be represented by IQueryable<...> instead of IEnumerable<...>. An IQueryable has an Expression and a Provider. The Expression represents the query that must be performed. The Provider knows which system must execute the query (usually a Database Management System) and how to communicate with this system. When the query must be executed, it is the task of the Provider to translate the Expression into the language that the system knows (usually something SQL-like) and to execute the SQL-query.
There are two kinds of IQueryable LINQ statements: those that return an IQueryable<...> of something, and those that return a TResult. The ones that return IQueryable only change the Expression. They are functions that use deferred execution.
Functions that do not return an IQueryable are ToList(), FirstOrDefault(), Any(), Max(), etc. Internally they call GetEnumerator() (usually via a foreach), which orders the Provider to translate the Expression and execute the query.
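A short sketch of the two kinds (the Order entity and the context are hypothetical):

// Deferred execution: this only extends the Expression; no SQL is sent yet
IQueryable<Order> bigOrders = context.Orders.Where(o => o.Total > 1000m);

// Immediate execution: ToList() calls GetEnumerator(), so the Provider
// translates the Expression to SQL, runs it, and materializes the rows
List<Order> results = bigOrders.ToList();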
Back to your question
So which one is more efficient, entity framework or SQL? Efficiency is not only the time to perform the queries, it is also the development/testing time, for the first version and for future changes in the software.
If you use Entity Framework (or a clone), the SQL queries created from the Expressions are pretty efficient, depending on the framework manufacturer. If you inspect the generated SQL, it is sometimes not optimal, although you'd have to be a pretty good SQL programmer to improve most of the queries.
The big advantage of Entity Framework and LINQ queries over SQL statements is that development time will be shorter. The syntax of LINQ statements is checked at compile time, while SQL statements are checked only at run time, so development and test periods will be shorter.
It is easy to reuse LINQ statements, while SQL statements almost always have to be written specifically for the query you want to execute. LINQ statements can be tested without a database, on any sequence of items that represents your tables.
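For instance, the same query logic can be exercised in a unit test against an in-memory sequence (a sketch; Order is a hypothetical entity):

// AsQueryable() wraps the list so the same IQueryable-based query
// code runs without any database behind it
var fakeOrders = new List<Order>
{
    new Order { Id = 1, Total = 10m },
    new Order { Id = 2, Total = 250m },
}.AsQueryable();

var big = fakeOrders.Where(o => o.Total > 100m).ToList(); // one row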
My Advice
For most queries you won't notice any difference in execution time between the entity framework query or the SQL query.
If you expect complicated queries and future changes, I'd go for Entity Framework, the main arguments being the shorter development time, the better testing possibilities, and the better maintainability.
If you detect some queries where you notice that the execution time is too long, you can always decide to bypass entity framework by executing a SQL query instead of using LINQ.
If you've wrapped your DbContext in a proper repository, where you hide the use cases from their implementations, the users of your repository won't notice the difference.
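For such a hot spot, with a DbContext the bypass can be as small as this (a sketch; ReportRow is a hypothetical DTO whose properties match the result columns):

// Executes a hand-tuned SQL statement directly; the rows are
// materialized into a plain DTO and are not change-tracked
var rows = context.Database.SqlQuery<ReportRow>(
        "SELECT RegionId, SUM(Amount) AS Total FROM Sales GROUP BY RegionId")
    .ToList();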

MS Access 2010: query slows down dramatically when using parameters

I hope this was not asked here before (I searched here and googled, but could not find an answer).
The problem: I'm using MS Access 2010 to select records from a linked table (there are millions of records in the table). If I specify criteria (e.g. a date) directly (for example date=#1/1/2013#), the query returns in an instant. If I use parameters (add a parameter of type Date/Time and provide the value 1/1/2013 when prompted, or reference a control on a form), the query takes minutes to load.
Please let me know if you have any ideas on what could be causing this. I do feel bad about asking such a question and possibly wasting someone's time...
Here's a potential answer; I didn't know this myself and did a little digging.
If performance is important, it may be necessary to prefer dynamic SQL even where parameter queries would otherwise be suitable, due to how queries are optimized. Generally, Access creates a plan for a new query upon saving. When a query contains a parameter, Access cannot know what value the parameter may contain and has to make a "good guess"; depending on which actual values are later supplied, the plan may be okay or poor, resulting in sub-optimal performance. In contrast, dynamic SQL sidesteps this because the "parameters" are hard-coded into the temporary string, so a new plan is compiled with that value, guaranteeing an optimal execution plan. Since compiling a new plan at runtime is very fast, it can be the case that dynamic SQL will outperform parameter queries.
Source: http://www.utteraccess.com/wiki/index.php/Parameter_Query#Performance
Also, if I had to guess, with your parameter query Access is requesting the ENTIRE table from Oracle and then filtering it down with your WHERE clause on the client, but when the criteria value is hard-coded in the WHERE clause, it just loads the matching records and can make use of indexes.
As far as a solution, I would build your query string in VBA then execute it. It opens you up to injection, but you can handle that. So:
Instead of using a saved parameter query object in Access, try to do something like this.
' Build the SQL with the criteria value hard-coded, so the engine
' compiles a fresh plan for this exact value at runtime
Dim qr As String
qr = "SELECT * FROM myTable WHERE myDate = #" & Me.dateControl & "#;"
'CurrentDb.Execute qr, dbFailOnError
DoCmd.RunSQL qr
Or, as you replied, CurrentDb.OpenRecordset(qr).
This would force the engine to make an execution plan at runtime rather than having a saved potentially suboptimal plan. Let me know if this works out for you, I'd be interested to see.
Of course, the above reference about using parameters with Access (JET/ACE) ONLY applies to Access back ends, not ODBC ones like SQL Server or Oracle. Since you pointed out that you're using Oracle here, creating a view or using a pass-through query should resolve this performance issue. One does NOT want to use Access/JET parameters with data coming from a server-based system; you are better off just sending the server SQL strings, but much better would be a pass-through query. Note that pass-through queries are read-only, so if the result set requires editing, you have to create a view and link to that view.

Deciding on LINQ to SQL vs StoredProcs

While developing applications, I usually go for stored procedures to contain CRUD logic, so as to improve performance and maintainability. But after experimenting with LINQ to SQL, I was wondering whether using compiled LINQ to SQL queries instead of stored procedures would help improve performance?
LINQ to SQL will not improve your performance, because you will be sending each CRUD operation as a string over the wire.
Performance will still be better with stored procedures, but ORMs like LINQ to SQL usually make development faster.
From my experience, I can rank performance as following:
1. Stored procedures
2. Native queries (using DbCommand)
3. LINQ to Entities (compiled query, EF4)
4. LINQ to SQL (compiled)
5. LINQ to Entities (not compiled, EF4)
6. LINQ to SQL
7. ESQL
Items 2, 3 and 4 may change their order depending on the nature of the queries, but in general a raw SQL query executes faster.
Based on your comments to both DevSlick and a1ex07, it seems you have a fundamental misunderstanding of what LINQ is. In order for LINQ queries to allow chaining, like
var activePeople = peopleList.Where(o => o.Active).OrderBy(o => o.Ordering).Select(o => o.Name);
the execution of the LINQ query must be delayed until it is enumerated:
foreach (var person in activePeople)
{
    // If this is LINQ-to-SQL, the query behind activePeople has waited
    // until now to request anything from the database
}
This means that the query .Where(o => o.Active).OrderBy(o => o.Ordering).Select(o => o.Name) is not actually interpreted by your computer until that point as well. If you run the same query 100 times, that means the computer has to reinterpret that query 100 times. For LINQ-to-SQL, that means translating the query to SQL 100 times before that SQL is sent to the database each time, even if the SQL is exactly the same every time.
Compiling the query ahead of time causes it to generate the SQL only once, and use that SQL every time the query is called. This has nothing to do with stored procedures - you would compile a query-to-a-stored-procedure in the same way that you would compile any other query. Asking "which gives better performance" is meaningless, as they are not mutually exclusive.
Though compiling a query sounds like a good thing, in practice interpreting a LINQ query (usually called "evaluating the expression tree") takes very little time compared to actually executing the SQL against the database, so you get very little benefit from compiling the query. Meanwhile, the syntax for compiling a query is atrocious:
static readonly Func<AdventureWorksEntities, Decimal, IQueryable<SalesOrderHeader>> s_compiledQuery2 =
    CompiledQuery.Compile<AdventureWorksEntities, Decimal, IQueryable<SalesOrderHeader>>(
        (ctx, total) => from order in ctx.SalesOrderHeaders
                        where order.TotalDue >= total
                        select order);

var orders = s_compiledQuery2.Invoke(context, totalDue);
For this reason, it is usually recommended to simply not compile your LINQ-to-SQL queries, because the ratio of code-noise-to-benefit is terrible.

What is the big deal with IQueryable?

I've seen a lot of people talking about IQueryable and I haven't quite picked up on what all the buzz is about. I always work with generic List<T> and find it very rich in the way you can "query" it and work with it, even running LINQ queries against it.
I'm wondering if there is a good reason to start considering a different default collection in my projects.
The IQueryable interface allows you to define parts of a query against a remote LINQ provider (typically against a database, but doesn't have to be) in multiple steps, and with deferred execution.
E.g. your database layer could define some restriction (e.g. based on permissions, security - whatever) by adding a .Where(x => x.......) clause to your query. But this doesn't get executed just yet - e.g. you're not retrieving 150'000 rows that match that criteria.
Instead, you pass up the IQueryable interface to the next level, the business layer, where you might be adding additional requirements and where clauses to your query - again, nothing gets executed just yet, you're also not tossing out 80'000 of your 150'000 rows you retrieved - you're just defining additional query criteria.
And the UI layer might do the same thing, e.g. based on user input in a form or something.
The magic is that you're passing the IQueryable interface through all the layers, adding additional criteria to it - but it doesn't get executed / evaluated until you actually force it. This also means you're not needlessly selecting and retrieving tons of data which you end up discarding afterwards.
You can't really do that with a classic static list - you have to pick the data, possibly discarding a lot of it again later on in the process - you have a static list, after all.
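A sketch of that layering (Document, context, and cutoff are hypothetical):

// Data layer: apply a security restriction; nothing executes yet
IQueryable<Document> visible = context.Documents.Where(d => !d.IsDeleted);

// Business layer: add more criteria to the same expression tree
IQueryable<Document> recent = visible.Where(d => d.Created >= cutoff);

// UI layer: force execution; one SQL query containing all the
// accumulated criteria is sent to the database
List<Document> page = recent.OrderBy(d => d.Created).Take(20).ToList();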
IQueryable allows you to make queries using LINQ, just like the LINQ to Object queries, where the queries are actually "compiled" and run elsewhere.
The most common implementations work against databases. If you use List<T> and LINQ to Objects, you load the entire "table" of data into memory, then run your query against it.
By using IQueryable<T>, the LINQ provider can "translate" your LINQ statement into actual SQL code, and run it on the database. The results can be returned to you and enumerated.
This is much, much more efficient, especially if you're working in N-Tiered systems.
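With LINQ to SQL, for example, you can watch that translation happen (a sketch; db is a DataContext with a Tags table):

// The provider turns the expression tree into T-SQL; GetCommand
// exposes the command that would be executed
var query = db.Tags.Where(t => t.Name.Contains("linq"));
Console.WriteLine(db.GetCommand(query).CommandText);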
LINQ queries against IEnumerable<T> produce delegates (methods) which, when invoked, perform the described query.
LINQ queries against IQueryable<T> produce expression trees, a data structure which represents the code that produced the query. LINQ providers such as LINQ to SQL interpret these data structures, generating the same query on the target platform (T-SQL in this case).
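The compiler makes the difference visible: the same lambda becomes a delegate in one case and a data structure in the other (a sketch; requires using System.Linq.Expressions):

// Against IEnumerable<T>: compiled to a delegate (IL that runs locally)
Func<Tag, bool> predicate = t => t.Name == "linq";

// Against IQueryable<T>: compiled to an expression tree that a
// provider can inspect and translate to T-SQL
Expression<Func<Tag, bool>> expression = t => t.Name == "linq";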
For an example of how the compiler interprets the query syntax against IQueryable<T>, see my answer to this question:
Building Dynamic LINQ Queries based on Combobox Value

Loading a huge entity tree with EF

I need to load a model, consisting of about 20 tables, from the database with Entity Framework.
So there are probably a few ways of doing this:
1. Use one huge Include call
2. Use many Include calls while manually iterating the model
3. Use many IsLoaded and Load calls
Here's what happens with these options:
1. EF creates a HUGE query, puts a very heavy load on the DB, and then does it again when mapping the model. So not really an option.
2. The database gets called a lot, again with pretty big queries.
3. The database gets called even more, but this time with small loads.
All of these options weigh heavily on performance, but I do need to load all of that data (it is needed for drawing calculations).
So what can I do?
a) Heavy operation => heavy load => do nothing :)
b) Review design => but how?
c) A magical option that will make all these problems go away
When you need to load a lot of data from a lot of different tables, there is no "magic" solution which makes all problems go away. But in addition to what you have already discussed, you should consider projection. If you don't need every single property of an entity, it is often cheaper to project just the information you do need, i.e.:
from parent in MyEntities.Parents
select new
{
    ParentName = parent.Name,
    Children = from child in parent.Children
               select new
               {
                   ChildName = child.Name
               }
}
One other thing to keep in mind is that for very large queries, the cost of compiling the query can often exceed the cost of executing it. Only profiling can tell you if this is the problem. If this turns out to be the problem, consider using CompiledQuery.
You might analyze the ratio of queries to updates. If you mostly upload the model once, then everything else is a query, then maybe you should store an XML representation of the model in the database as a "shadow" of the model. You should be able to either read the entire XML column in at once fairly quickly, or else maybe you can do your calculations (or at least the fetch of the values necessary for the calculations) using XQuery.
This assumes SQL Server 2005 or above.
You could consider caching your data in memory instead of getting it from the database each time.
I would recommend Enterprise Library Caching Application block: http://msdn.microsoft.com/en-us/library/dd203099.aspx
