I have this LINQ query that translates very oddly to SQL. I get the correct results, but there must be a better way. So question 1 is:
Why is it that in SQL I get no GROUP BY, no COUNT, and all of the
columns are returned instead of just 2, and yet the results in C# are correct? (I checked with the profiler.)
And question 2 is:
I would like to modify the query slightly so that I also get the
results where the count is 0. At the moment I only get rows where the count is > 0,
because of the group by.
LINQ:
List<Tuple<string, int>> countPerType = db1.Audits
    .OrderBy(p => p.CreatedBy)
    .GroupBy(o => new { o.Type, o.CreatedBy })
    .ToList()
    .Select(g => new Tuple<string, int>(
        g.Select(f => f.CreatedBy + ',' + f.Type).FirstOrDefault(),
        (int?)g.Count() ?? 0))
    .ToList();
Note that if I remove the .ToList() in the middle, I get the exception "Only parameterless constructors and initializers are supported in LINQ to Entities".
Thanks for your input
You run into several problems. I think the cause of this is that you aren't aware of the difference between queries that are IEnumerable and queries that are IQueryable.
An IEnumerable query contains all the information needed to enumerate over its elements. The query is executed by your own process.
An IQueryable query contains an Expression and a Provider. The Provider knows who will execute the query and how to communicate with that executor. Quite often the executor is a database, but it can be something else, like a web service or a JSON file.
In your case the executor is a database and the language is SQL.
When the GetEnumerator() function of your IQueryable is called, the Provider is asked to translate the Expression into the language that the executor knows. The translated query is sent to the executor, and the returned data is put into an Enumerator (not an IEnumerable!).
Of course SQL does not know what a System.Tuple is, nor does it know functions like String.operator+.
Therefore your Provider can't translate your expression into SQL. That is the reason you have to do your first ToList().
In IQueryable queries you can't use any of your own functions, and only a limited set of .NET functions is supported.
See this list of supported and unsupported LINQ methods.
It is not advisable to use ToList() at this stage of your query, because it enumerates all elements of your sequence, while in fact you only need an enumerator. It could be that in the rest of your query you only want a few elements. In that case it would be a waste to enumerate over all of them to create a list, and then to enumerate again to do the rest of your LINQ.
Instead of ToList(), use Enumerable.AsEnumerable(). This brings the data of the query to local memory as an IEnumerable once it is enumerated; the elements are not enumerated yet. It allows you to call local functions in the rest of your query.
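Concretely, the swap described above looks like this when applied to the query from the question (a sketch: only the mid-query ToList() is replaced, everything else is left as you wrote it):
List<Tuple<string, int>> countPerType = db1.Audits
    .OrderBy(p => p.CreatedBy)
    .GroupBy(o => new { o.Type, o.CreatedBy })
    .AsEnumerable()   // boundary: everything before this is translated, everything after runs locally
    .Select(g => new Tuple<string, int>(
        g.Select(f => f.CreatedBy + ',' + f.Type).FirstOrDefault(),
        g.Count()))
    .ToList();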
Another problem is that you transport way more data to local memory than you plan to use. One of the slower parts of database queries is the transport of data to your process. You should minimize the amount of data.
You took all Audits, and created groups of Audits that have the same values for (Type, CreatedBy). In other words: all Audits in the same group have the same values for (Type, CreatedBy). This value is also the Key of the group.
You don't want all Audits locally; you only want the Key of the group and the number of elements in that group (= the number of audits that have (Type, CreatedBy) equal to the key).
This is the only data you need to transport to local memory: Type, CreatedBy and the number of audits in the group:
var result = db1.Audits.GroupBy(o => new { o.Type, o.CreatedBy })
    .Select(group => new
    {
        Type = group.Key.Type,
        CreatedBy = group.Key.CreatedBy,
        AuditCount = group.Count(),
    })
    .OrderBy(item => item.CreatedBy)
    // the data that is left is the data you need locally
    // bring it to local memory:
    .AsEnumerable()
    // if you want, you can put Type and CreatedBy into one string
    .Select(item => new
    {
        AuditType = item.Type + item.CreatedBy,
        AuditCount = item.AuditCount,
    });
I chose not to put the result in a Tuple, because you would lose the compiler's help if you mix up fields. But if you really want one, suit yourself.
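If you really do want the original List<Tuple<string, int>> shape at the end, a minimal sketch on top of the result above would be:
// Mapping the anonymous items back to tuples; note that you lose the named fields again.
List<Tuple<string, int>> countPerType = result
    .Select(item => Tuple.Create(item.AuditType, item.AuditCount))
    .ToList();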
Related
What is the explanation for EF downloading all result rows when AsEnumerable() is used?
What I mean is that this code:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
will download all the rows from the table before passing any row to the Where() method and there could be millions of rows in the table.
What I would like it to do, is to download only enough to gather 100 rows that would satisfy the Id % 2 == 0 condition (most likely just around 200 rows).
Couldn't EF do on demand loading of rows like you can with plain ADO.NET using Read() method of SqlDataReader and save time and bandwidth?
I suppose that it does not work like that for a reason and I'd like to hear a good argument supporting that design decision.
NOTE: This is a completely contrived example and I know normally you should not use EF this way, but I found this in some existing code and was just surprised my assumptions turned out to be incorrect.
The short answer: The reason for the different behaviors is that, when you use IQueryable directly, a single SQL query can be formed for your entire LINQ query; but when you use IEnumerable, the entire table of data must be loaded.
The long answer: Consider the following code.
context.Logs.Where(x => x.Id % 2 == 0)
context.Logs is of type IQueryable<Log>. IQueryable<Log>.Where is taking an Expression<Func<Log, bool>> as the predicate. The Expression represents an abstract syntax tree; that is, it's more than just code you can run. Think of it as being represented in memory, at runtime, like this:
Lambda (=>)
    Parameters
        Variable: x
    Body
        Equals (==)
            Modulo (%)
                PropertyAccess (.)
                    Variable: x
                    Property: Id
                Constant: 2
            Constant: 0
The LINQ-to-Entities engine can take context.Logs.Where(x => x.Id % 2 == 0) and mechanically convert it into a SQL query that looks something like this:
SELECT *
FROM "Logs"
WHERE "Logs"."Id" % 2 = 0;
If you change your code to context.Logs.Where(x => x.Id % 2 == 0).Take(100), the SQL query becomes something like this:
SELECT *
FROM "Logs"
WHERE "Logs"."Id" % 2 = 0
LIMIT 100;
This is entirely because the LINQ extension methods on IQueryable use Expression instead of just Func.
Now consider context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0). The IEnumerable<Log>.Where extension method is taking a Func<Log, bool> as a predicate. That is only runnable code. It cannot be analyzed to determine its structure; it cannot be used to form a SQL query.
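To make that Func/Expression distinction concrete, here is a small self-contained sketch; it uses an in-memory array and EnumerableQuery rather than EF, and the names are mine, but the two Where overloads involved are the same:
using System;
using System.Linq;
using System.Linq.Expressions;

class WhereOverloadsDemo
{
    static void Main()
    {
        // Enumerable.Where takes a Func<T, bool>: plain compiled code that can only be run.
        Func<int, bool> func = x => x % 2 == 0;

        // Queryable.Where takes an Expression<Func<T, bool>>: a data structure describing
        // the lambda, which a provider (such as EF) can inspect and translate to SQL.
        Expression<Func<int, bool>> expr = x => x % 2 == 0;

        var viaEnumerable = new[] { 1, 2, 3, 4 }.Where(func);               // binds to Enumerable.Where
        var viaQueryable  = new[] { 1, 2, 3, 4 }.AsQueryable().Where(expr); // binds to Queryable.Where

        Console.WriteLine(string.Join(", ", viaEnumerable)); // 2, 4
        Console.WriteLine(string.Join(", ", viaQueryable));  // 2, 4
    }
}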
Entity Framework and LINQ use deferred execution (often loosely called lazy loading). It means (among other things) that they will not run the query until they need to enumerate the results: for instance when you call ToList(), or when the result is used as an enumerator (in a foreach, for instance).
Instead, it builds up a query from the predicates and returns IQueryable objects that let you further "pre-filter" the results before actually returning them. You can find more info here, for instance. Entity Framework will then build a SQL query based on the predicates you have passed it.
In your example:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
From the Logs table in the context, it fetches everything, returns an IEnumerable with the results, then filters that result, takes the first 100, and finally materializes them as a List.
On the other hand, just removing the AsEnumerable solves your problem:
context.Logs.Where(x => x.Id % 2 == 0).Take(100).ToList();
Here it builds up the query/filter, and only once ToList() is executed does it query the database.
It also means that you can dynamically build a complex query without actually running it on the DB until the end, for instance:
var logs = context.Logs.Where(a); // first filter
if (something) {
    logs = logs.Where(b); // second filter
}
var results = logs.Take(100).ToList(); // only here is the query actually executed
Update
As mentioned in your comment, you seem to already know what I just wrote and are just asking for the reason.
It's even simpler: since AsEnumerable casts the results to another type (an IQueryable<T> to an IEnumerable<T> in this case), it has to convert all the result rows first, so it has to fetch the data first. It's basically a ToList in this case.
Clearly, you understand why it's better to avoid using AsEnumerable() the way you do in your question.
Also, some of the other answers have made it very clear why calling AsEnumerable() changes the way the query is performed and read. In short, it's because you are then invoking the IEnumerable<T> extension methods rather than the IQueryable<T> extension methods, the latter allowing you to combine predicates before executing the query in the database.
However, I still feel that this doesn't answer your actual question, which is a legitimate question. You said (emphasis mine):
What I mean is that this code:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
will download all the rows from the table before passing any row to the Where() method and there could be millions of rows in the table.
My question to you is: what made you conclude that this is true?
I would argue that, because you are using IEnumerable<T> instead of IQueryable<T>, it's true that the query being performed in the database will be a simple:
select * from logs
... without any predicates, unlike what would have happened if you had used IQueryable<T> to invoke Where and Take.
However, the AsEnumerable() method call does not fetch all the rows at that moment, as other answers have implied. In fact, this is the implementation of the AsEnumerable() call:
public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)
{
    return source;
}
There is no fetching going on there. In fact, even the calls to IEnumerable<T>.Where() and IEnumerable<T>.Take() don't actually start fetching any rows at that moment. They simply set up wrapping IEnumerables that will filter results as they are iterated on. The fetching and iterating of the results really only begins when ToList() is called.
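To illustrate those wrapping, deferred enumerables, here is a stripped-down sketch (not the real LINQ source) of the kind of iterator that Enumerable.Where returns:
using System;
using System.Collections.Generic;

static class DeferredWhereSketch
{
    // Nothing is read from the source when this method is called; items are pulled
    // one at a time, and only surfaced if they match, while the result is iterated.
    public static IEnumerable<T> WhereSketch<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (var item in source)
        {
            if (predicate(item))
                yield return item;
        }
    }
}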
So when you say:
Couldn't EF do on demand loading of rows like you can with plain ADO.NET using Read() method of SqlDataReader and save time and bandwidth?
... again, my question to you would be: doesn't it do that already?
If your table had 1,000,000 rows, I would still expect your code snippet to only fetch up to 100 rows that satisfy your Where condition, and then stop fetching rows.
To prove the point, try running the following little program:
static void Main(string[] args)
{
    var list = PretendImAOneMillionRecordTable().Where(i => i < 500).Take(10).ToList();
}

private static IEnumerable<int> PretendImAOneMillionRecordTable()
{
    for (int i = 0; i < 1000000; i++)
    {
        Console.WriteLine("fetching {0}", i);
        yield return i;
    }
}
... when I run it, I only get the following 10 lines of output:
fetching 0
fetching 1
fetching 2
fetching 3
fetching 4
fetching 5
fetching 6
fetching 7
fetching 8
fetching 9
It doesn't iterate through the whole set of 1,000,000 "rows" even though I am chaining Where() and Take() calls on IEnumerable<T>.
Now, you do have to keep in mind that, for your little EF code snippet, if you test it against a very small table, it may actually fetch all the rows at once, if all the rows fit within the value for SqlConnection.PacketSize. This is normal. Every time SqlDataReader.Read() is called, it doesn't just fetch a single row; to reduce the number of network round trips, it will always try to fetch a batch of rows at a time. I wonder if this is what you observed, and whether this misled you into thinking that AsEnumerable() was causing all rows to be fetched from the table.
Even though you will find that your example doesn't perform nearly as badly as you thought, this is not a reason to avoid IQueryable. Using IQueryable to construct more complex database queries will almost always provide better performance, because you can then benefit from database indexes, etc., to fetch results more efficiently.
AsEnumerable() eagerly loads the DbSet<T> Logs
You probably want something like
context.Logs.Where(x => x.Id % 2 == 0).AsEnumerable();
The idea here is that you're applying a predicate filter to the collection before actually loading it from the database.
An impressive subset of the world of LINQ is supported by EF. It will translate your beautiful LINQ queries into SQL expressions behind the scenes.
I have come across this before.
The query is not executed until it is enumerated. Because you have done
context.Logs.AsEnumerable()
it has assumed you have finished composing the query, and has therefore compiled it and returns all rows.
If you changed this to:
context.Logs.Where(x => x.Id % 2 == 0).AsEnumerable()
It would compile a SQL statement that gets only the rows where the Id is divisible by 2.
Similarly if you did
context.Logs.Where(x => x.Id % 2 == 0).Take(100).ToList();
that would create a statement that would get the top 100...
I hope that helps.
LINQ to Entities keeps a store expression, built up from all the LINQ methods in the chain, until the query is enumerated.
When you call Where() before AsEnumerable(), like this:
context.Logs.Where(...).AsEnumerable()
the Where() sees that the previous call in the chain carries a store expression, so it appends its predicate to it and execution remains deferred.
The overload of Where that is being called is different if you call this:
context.Logs.AsEnumerable().Where(...)
Here the Where() only knows that the previous call returned an enumeration (it could be any kind of "enumerable" collection), and the only way it can apply its condition is by iterating over the collection through the IEnumerable implementation of the DbSet class, which must retrieve the records from the database first.
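To see that overload binding in isolation, here is a small self-contained sketch; an in-memory IQueryable stands in for the DbSet, since a DbSet is likewise both IQueryable<T> and IEnumerable<T>:
using System;
using System.Collections.Generic;
using System.Linq;

class OverloadBindingDemo
{
    static void Main()
    {
        IQueryable<int> source = Enumerable.Range(1, 10).AsQueryable();

        IQueryable<int> queryable = source;
        IEnumerable<int> enumerable = source;

        // Same lambda, different overload picked purely from the static type of the receiver:
        var viaStoreExpression = queryable.Where(x => x % 2 == 0);  // Queryable.Where: appended to the store expression
        var viaLocalIteration  = enumerable.Where(x => x % 2 == 0); // Enumerable.Where: filters by iterating the collection

        Console.WriteLine(viaStoreExpression.Expression);    // prints the composed expression tree
        Console.WriteLine(viaLocalIteration.GetType().Name); // prints a LINQ to Objects iterator type
    }
}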
I don't think you should ever use this:
context.Logs.AsEnumerable().Where(x => x.Id % 2 == 0).Take(100).ToList();
The correct way of doing things would be:
context.Logs.AsQueryable().Where(x => x.Id % 2 == 0).Take(100).ToList();
Answer with explanations here:
What's the difference(s) between .ToList(), .AsEnumerable(), AsQueryable()?
Why use AsQueryable() instead of List()?
So I recently discovered that you can force Entity Framework not to translate your projection into SQL by specifying a Func<T, TResult> to the .Select() extension method rather than an expression. This is useful when you want to transform your queried data, but that transformation should happen in your code rather than on the database.
For example, when using EF5's new enum support and trying to project an enum to a string property in a DTO, this would fail:
results.Select(r => new Dto { Status = r.Status.ToString() })
this would work:
results.Select(new Func<Record, Dto>(r => new Dto { Status = r.Status.ToString() }));
because in the first (expression) case, EF can't figure out how to translate Status.ToString() to something the SQL database could perform, but as per this article Func predicates aren't translated.
Once I had this working, it wasn't much of a leap to create the following extension method:
public static IQueryable<T> Materialize<T>(this IQueryable<T> q)
{
    return q.Select(new Func<T, T>(t => t)).AsQueryable();
}
So my question is - are there any pitfalls I should be wary of when using this? Is there a performance impact - either in injecting this do-nothing projection into the query pipeline or by causing EF to not send the .Where() clause to the server and thereby send all the results over the wire?
The intention is to still use a .Where() method to filter the results on the server, but then to use .Materialize() before .Select() so that the provider doesn't try to translate the projection to SQL Server:
return Context.Record
    .Where(r => // Some filter to limit results)
    .Materialize()
    .Select(r => // Some projection to DTO, etc.);
Simply using AsEnumerable should do the same:
return Context.Record
    .Where(r => // Some filter to limit results)
    .AsEnumerable() // All extension methods now accept Func instead of Expression
    .Select(r => // Some projection to DTO, etc.);
There is also no reason in your Materialize method to go back to IQueryable, because it is no longer a real IQueryable that is translated to another query. It is just an IEnumerable.
In terms of performance you should be OK. Everything before materialization is evaluated in the database and everything after it in your code. Moreover, in both your example and mine the query still has deferred execution: it is not executed until something enumerates it.
There is one problem though: the number of columns fetched to the client. If the projection were translated to SQL it would be something like select Status from Record, whereas with the materialization first it becomes select Status, field2, field3, field4 from Record.
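If the number of fetched columns matters, a possible middle ground (a sketch reusing the Record, Status and Dto names from the question; the Where filter is a placeholder) is to project to just the columns you need while the query is still IQueryable, and only switch to Func-based operators after that:
var dtos = Context.Record
    .Where(r => /* some filter to limit results */ true)
    .Select(r => new { r.Status })  // still IQueryable: roughly "select Status from Record where ..."
    .AsEnumerable()                 // from here on, LINQ to Objects
    .Select(x => new Dto { Status = x.Status.ToString() })
    .ToList();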
I have a data table containing multiple columns, one of which stores somewhat complex text patterns. I need to parse that field to determine whether particular substrings exist in specific positions within the larger string pattern, and then whether the record should be filtered out as a result.
I can't see a way to perform the parse other than by writing a C# parsing function with String.Split method calls, foreach, etc. But if I try to parse like this:
var myFilteredTable = _db.MyTable.Where(t => t.Column1 == "Filter1"
                                          && ParseIsMyItemInColumn2(t));
I get "has no supported translation to SQL" errors.
The other option I thought of was to build the initial result without the Parse:
var myFilteredTable = _db.MyTable.Where(t => t.Column1 == "Filter1");
and iterate through the IQueryable result set, testing each row with the parse function to filter out the unwanted rows, but IQueryable has no Remove method to strip out unwanted rows, nor an Add method to let me build up a new result set.
So how can I filter in linq when I also need to write a Parse function?
Well the "initial filter in the database then do the rest locally" is easy:
var filtered = _db.MyTable.Where(t => t.Column1 == "Filter1")
                  .AsEnumerable() // Do the rest locally
                  .Where(t => ParseIsMyItemInColumn2(t));
AsEnumerable is a simple pass through method, but because the result is typed as IEnumerable<T> rather than IQueryable<T>, the subsequent LINQ operations use the LINQ to Objects methods in Enumerable rather than the ones in Queryable.
Obviously if a lot of items match the first filter but fail the second, that won't be terribly efficient...
Unfortunately, if the "parse function" is not something that can be translated to SQL, you will need to pull the results and use LINQ to Objects:
var myFilteredTable = _db.MyTable.Where(t => t.Column1 == "Filter1")
                          .AsEnumerable().Where(ParseIsMyItemInColumn2);
Note that this will stream all of the results into memory, and then perform your parse.
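For completeness, a purely hypothetical sketch of what such a local parse filter might look like; MyTable comes from the question, but the column name, delimiter, position and expected token are invented for illustration:
// Hypothetical example only: the real pattern isn't shown in the question.
static bool ParseIsMyItemInColumn2(MyTable t)
{
    var parts = (t.Column2 ?? string.Empty).Split('|');  // assumed delimiter
    return parts.Length > 2 && parts[2] == "EXPECTED";   // assumed position and value
}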
I'm using NHibernate 3.2 and I have a repository method that looks like:
public IEnumerable<MyModel> GetActiveMyModel()
{
    return from m in Session.Query<MyModel>()
           where m.Active == true
           select m;
}
Which works as expected. However, sometimes when I use this method I want to filter it further:
var models = MyRepository.GetActiveMyModel();
var filtered = from m in models
where m.ID < 100
select new { m.Name };
Which produces the same SQL as the first one, and the second filter and select must be done after the fact. I thought the whole point of LINQ was that it formed an expression tree that is unravelled only when it's needed, so the correct SQL for the job can be created, saving database requests.
If not, it means all of my repository methods have to return exactly what is needed and I can't make use of LINQ further down the chain without taking a penalty.
Have I got this wrong?
Updated
In response to the comment below: I omitted the line where I iterate over the results, which causes the initial SQL to be run (WHERE Active = 1) and the second filter (ID < 100) is obviously done in .NET.
Also, If I replace the second chunk of code with
var models = MyRepository.GetActiveMyModel();
var filtered = from m in models
where m.Items.Count > 0
select new { m.Name };
It generates the initial SQL to retrieve the active records and then runs a separate SQL statement for each record to find out how many Items it has, rather than writing something like I'd expect:
SELECT Name
FROM MyModel m
WHERE Active = 1
AND (SELECT COUNT(*) FROM Items WHERE MyModelID = m.ID) > 0
You are returning IEnumerable<MyModel> from the method, which will cause in-memory evaluation from that point on, even if the underlying sequence is IQueryable<MyModel>.
If you want to allow code after GetActiveMyModel to add to the SQL query, return IQueryable<MyModel> instead.
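A minimal sketch of that change, based on the repository method from the question:
// Returning IQueryable<MyModel> lets callers add further Where/Select clauses that are
// still composed into the single SQL statement NHibernate generates on enumeration.
public IQueryable<MyModel> GetActiveMyModel()
{
    return Session.Query<MyModel>()
                  .Where(m => m.Active);
}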
You're running IEnumerable's extension method "Where" instead of IQueryable's. It will still evaluate lazily and give the same output; however, it evaluates the IQueryable on entry, and you're filtering the collection in memory instead of against the database.
When you later add an extra condition on another table (the count), it has to lazily fetch each and every one of the Items collections from the database since it has already evaluated the IQueryable before it knew about the condition.
(Yes, I would also like the extension methods on IEnumerable to be virtual members instead, but, alas, they're not.)
I am reading records from the database, checking some conditions, and storing them in a List<Result>. Result is a class. Then I perform LINQ queries on the List<Result>, such as grouping and counting. There may be at least 50,000 records in the List<Result>, so is it better to use LINQ on the list, or to reinsert the records into the db and perform the queries there?
Why not keep it as an IQueryable instead of a List? Using LINQ to SQL or LINQ to Entities, the actual data set will never be pulled into memory, and the queries will actually go down to the database to run.
Example:
Database db = new Database(); // this is what L2E gives you...
var children = db.Person.Where(p => p.Age < 21); // no actual database query performed
// will do: "select count(*) from Person where Age < 21"
int numChildren = children.Count();
var grouped = children.GroupBy(p => p.Age); // no actual query
int youngest = children.Min(p => p.Age); // performs query
int numYoungest = children.Count(p => p.Age == youngest); // performs query
var youngestNames = children.Where(p => p.Age == youngest).Select(p => p.Name); // no query
var anArray = youngestNames.ToArray(); // performs query
string names = string.Join(", ", anArray); // no query, of course
I'm currently asking the same kind of thing. I don't really know the exact answer either, but from what I know, LINQ is not well known to be fast on objects. Also, since a List is not indexed, when you run advanced queries against it, a lot of computation may be needed to get what you asked for. And since the code is generic, that can mean slower execution.
The best thing would be, if you are able, to do everything in one query, or even use a stored procedure to do your processing. Another possibility, if you are always checking the same initial condition, is to create a view and query it directly (instead of reinserting from the client). I think that if you have more than 50,000 results, using a list is probably not a good idea (memory and performance).
This probably doesn't answer your question directly, but short of benchmarking you won't know. It really depends on what you are doing with the data.