Linq woes - A CollectionType is required - linq

Why doesn't my linq query work?
I suspect it may have something to do with lazy loading. It seems to work in linqpad.
public IList<PaymentDto> GetDocumentPayments(int documentId, bool? allowRepeatPayments, byte[] paymentStatuses, int[] paymentMethods)
{
    using (var ctx = ObjectContextManager<MyDataContext>.GetManager("MyDataContext"))
    {
        var payments = new List<PaymentDto>();
        var ps = new List<byte>();
        if (paymentStatuses != null)
        {
            ps = paymentStatuses.ToList();
        }
        var pm = new List<int>();
        if (paymentMethods != null)
        {
            pm = paymentMethods.ToList();
        }
        IQueryable<Payment> data =
            from payment in ctx.ObjectContext.Documents.OfType<Payment>()
            where
                ps.Contains(payment.Status) &&
                pm.Contains(payment.Method) &&
                payment.DocumentId == documentId &&
                (allowRepeatPayments == null || payment.AllowRepeatPayments == allowRepeatPayments)
            orderby payment.Id
            select payment;
        foreach (var p in data) // Fails here
        {
            payments.Add(ReadData(p));
        }
        return payments;
    }
}
Throws error: A CollectionType is required.
Parameter name: collectionType.

Constructs like (allowRepeatPayments == null || payment.AllowRepeatPayments == allowRepeatPayments) can do funny things to a query. Try what happens when you do this instead:
if (allowRepeatPayments.HasValue)
{
    data = data.Where(p => p.AllowRepeatPayments == allowRepeatPayments);
}
You can do the same for paymentStatuses and paymentMethods.
It may solve your problem, but if not, it is an improvement anyway, because the condition is only added when it is necessary and the SQL is not cluttered when it isn't.
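Applying the same pattern to the list parameters might look like the sketch below. This is only an illustration based on the method above; each condition is added only when a value was actually supplied, so no empty collection ever reaches the query translator:
IQueryable<Payment> data = ctx.ObjectContext.Documents.OfType<Payment>()
    .Where(p => p.DocumentId == documentId);

if (paymentStatuses != null && paymentStatuses.Length > 0)
{
    var ps = paymentStatuses.ToList();
    data = data.Where(p => ps.Contains(p.Status));
}
if (paymentMethods != null && paymentMethods.Length > 0)
{
    var pm = paymentMethods.ToList();
    data = data.Where(p => pm.Contains(p.Method));
}
if (allowRepeatPayments.HasValue)
{
    data = data.Where(p => p.AllowRepeatPayments == allowRepeatPayments);
}
data = data.OrderBy(p => p.Id);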

The exception is thrown by internal methods of the framework while translating the LINQ query to SQL, when the query contains the same kind of null-check construct as the one you are using, for example:
.Where(data => x & y & (values == null || values.Contains(data.value)));
I just experienced the same exception on a server running the earliest version of .Net 4 RTM (4.0.30319.1).
It comes from
exception : System.ArgumentException: A CollectionType is required.
Parameter name: collectionType
at System.Data.Common.CommandTrees.ExpressionBuilder.Internal.ArgumentValidation.ValidateNewEmptyCollection(TypeUsage collectionType, DbExpressionList& validElements)
at System.Data.Objects.ELinq.ExpressionConverter.NewArrayInitTranslator.TypedTranslate(ExpressionConverter parent, NewArrayExpression linq)
at System.Data.Objects.ELinq.ExpressionConverter.TranslateExpression(Expression linq)
at System.Data.Objects.ELinq.ExpressionConverter.ConstantTranslator.TypedTranslate(ExpressionConverter parent, ConstantExpression linq)
at System.Data.Objects.ELinq.ExpressionConverter.TranslateExpression(Expression linq)
at System.Data.Objects.ELinq.ExpressionConverter.MethodCallTranslator.UnarySequenceMethodTranslator.Translate(ExpressionConverter parent, MethodCallExpression call) ...
The error is very rare on the internet, and it doesn't happen under the same conditions on newer versions, so it seems to have been fixed in more recent versions of .NET.
The suggestion of Gert Arnold will probably also avoid the error, but the form above is often used (like its SQL counterpart) and it's short and useful.
So for those who still encounter this error and don't understand why it works on some machines and sometimes not, I suggest checking the .NET Framework version.
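A quick way to check which CLR build a machine is running is to print Environment.Version; the revision part is what distinguishes the original 4.0 RTM (4.0.30319.1) from later patched builds. A minimal sketch:
using System;

class RuntimeVersionCheck
{
    static void Main()
    {
        // Prints e.g. 4.0.30319.1 on the original .NET 4 RTM,
        // and a higher revision number on patched installations.
        Console.WriteLine(Environment.Version);
    }
}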

Related

ef core - multi tenancy using global filters

OnModelCreating is called only once per DbContext type (the model is cached). This is a problem since the tenant Id is set per request.
How do I re-configure the global filter every time I create a new instance of the DbContext?
If I can't use a global filter, what is the alternative way?
Update:
I needed to provide a generic filter with an expression like e => e.TenantId == _tenantId. I am using the following expression:
var p = Expression.Parameter(type, "e");
Expression.Lambda(
    Expression.Equal(
        Expression.Property(p, tenantIdProperty.PropertyInfo),
        Expression.Constant(_tenantId)),
    p);
Since this is run once, _tenantId is fixed. So even if I update it later, the first value stays captured in the expression.
So my question is: what is the proper way to set the right side of that equality?
With EF Core you can actually use the following filter syntax:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    var entityConfiguration = modelBuilder.Entity<MyTenantAwareEntity>();
    entityConfiguration.ToTable("my_table")
        .HasQueryFilter(e => EF.Property<string>(e, "TenantId") == _tenantProvider.GetTenant())
    [...]
The _tenantProvider is the class responsible for getting your tenant, in your case from the HTTP request; to do that you can use HttpContextAccessor.
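A minimal sketch of such a provider, assuming a hypothetical ITenantProvider abstraction and a tenant id carried in a request header (the interface name and header are illustrative, not part of the answer above):
using Microsoft.AspNetCore.Http;

public interface ITenantProvider
{
    string GetTenant();
}

public class HttpHeaderTenantProvider : ITenantProvider
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public HttpHeaderTenantProvider(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public string GetTenant()
    {
        // Illustrative: read the tenant id from a request header.
        return _httpContextAccessor.HttpContext?.Request.Headers["X-Tenant-Id"].ToString();
    }
}

// Registration, e.g. in Startup.ConfigureServices:
// services.AddHttpContextAccessor();
// services.AddScoped<ITenantProvider, HttpHeaderTenantProvider>();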
This is fixed by using the following as the right-hand expression:
Expression.MakeMemberAccess(
    Expression.Constant(this, baseDbContextType),
    baseDbContextType.GetProperty("TenantId"))
I use a base class for all my db contexts.
GetProperty() works as is because TenantId is a public property.
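Putting the question's snippet and this fix together, a sketch of the full filter could look like this (names taken from the question; the point is that the right-hand side reads the context's TenantId property at query time instead of capturing a constant):
var p = Expression.Parameter(type, "e");
var body = Expression.Equal(
    Expression.Property(p, tenantIdProperty.PropertyInfo),
    Expression.MakeMemberAccess(
        Expression.Constant(this, baseDbContextType),
        baseDbContextType.GetProperty("TenantId")));
var filter = Expression.Lambda(body, p);
// e.g. modelBuilder.Entity(type).HasQueryFilter(filter);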
And, if you use it with soft delete, the solution is as follows (for EF Core 3.1):
// ReplacingExpressionVisitor lives in the Microsoft.EntityFrameworkCore.Query namespace.
internal static void AddQueryFilter<T>(this EntityTypeBuilder entityTypeBuilder, Expression<Func<T, bool>> expression)
{
    var parameterType = Expression.Parameter(entityTypeBuilder.Metadata.ClrType);
    var expressionFilter = ReplacingExpressionVisitor.Replace(
        expression.Parameters.Single(), parameterType, expression.Body);
    var currentQueryFilter = entityTypeBuilder.Metadata.GetQueryFilter();
    if (currentQueryFilter != null)
    {
        var currentExpressionFilter = ReplacingExpressionVisitor.Replace(
            currentQueryFilter.Parameters.Single(), parameterType, currentQueryFilter.Body);
        expressionFilter = Expression.AndAlso(currentExpressionFilter, expressionFilter);
    }
    var lambdaExpression = Expression.Lambda(expressionFilter, parameterType);
    entityTypeBuilder.HasQueryFilter(lambdaExpression);
}
Usage:
if (typeof(ITrackSoftDelete).IsAssignableFrom(entityType.ClrType))
modelBuilder.Entity(entityType.ClrType).AddQueryFilter<ITrackSoftDelete>(e => IsSoftDeleteFilterEnabled == false || e.IsDeleted == false);
if (typeof(ITrackTenant).IsAssignableFrom(entityType.ClrType))
modelBuilder.Entity(entityType.ClrType).AddQueryFilter<ITrackTenant>(e => e.TenantId == MyTenantId);
Thanks to YZahringer

Scalable Contains method for LINQ against a SQL backend

I'm looking for an elegant way to execute a Contains() statement in a scalable way. Please allow me to give some background before I come to the actual question.
The IN statement
In Entity Framework and LINQ to SQL the Contains statement is translated as a SQL IN statement. For instance, from this statement:
var ids = Enumerable.Range(1,10);
var courses = Courses.Where(c => ids.Contains(c.CourseID)).ToList();
Entity Framework will generate
SELECT
[Extent1].[CourseID] AS [CourseID],
[Extent1].[Title] AS [Title],
[Extent1].[Credits] AS [Credits],
[Extent1].[DepartmentID] AS [DepartmentID]
FROM [dbo].[Course] AS [Extent1]
WHERE [Extent1].[CourseID] IN (1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
Unfortunately, the In statement is not scalable. As per MSDN:
Including an extremely large number of values (many thousands) in an IN clause can consume resources and return errors 8623 or 8632
which has to do with running out of resources or exceeding expression limits.
But before these errors occur, the IN statement becomes increasingly slow with growing numbers of items. I can't find documentation about its growth rate, but it performs well up to a few thousands of items, but beyond that it gets dramatically slow. (Based on SQL Server experiences).
Scalable
We can't always avoid this statement. A JOIN with the source data instead would generally perform much better, but that's only possible when the source data is in the same context. Here I'm dealing with data coming from a client in a disconnected scenario, so I have been looking for a scalable solution. A satisfactory approach turned out to be cutting the operation into chunks:
var courses = ids.ToChunks(1000)
.Select(chunk => Courses.Where(c => chunk.Contains(c.CourseID)))
.SelectMany(x => x).ToList();
(where ToChunks is this little extension method; a minimal sketch of it follows below).
This executes the query in chunks of 1000 that all perform well enough. With e.g. 5000 items, 5 queries will run that together are likely to be faster than one query with 5000 items.
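Since the linked ToChunks helper isn't reproduced here, this is a minimal sketch of what such an extension method could look like, assuming it simply splits a sequence into lists of at most chunkSize items:
public static IEnumerable<List<T>> ToChunks<T>(this IEnumerable<T> source, int chunkSize)
{
    var chunk = new List<T>(chunkSize);
    foreach (var item in source)
    {
        chunk.Add(item);
        if (chunk.Count == chunkSize)
        {
            yield return chunk;
            chunk = new List<T>(chunkSize);
        }
    }
    if (chunk.Count > 0)
        yield return chunk;
}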
But not DRY
But of course I don't want to scatter this construct all over my code. I am looking for an extension method by which any IQueryable<T> can be transformed into a chunky executing statement. Ideally something like this:
var courses = Courses.Where(c => ids.Contains(c.CourseID))
.AsChunky(1000)
.ToList();
But maybe this
var courses = Courses.ChunkyContains(c => c.CourseID, ids, 1000)
.ToList();
I've given the latter solution a first shot:
public static IEnumerable<TEntity> ChunkyContains<TEntity, TContains>(
this IQueryable<TEntity> query,
Expression<Func<TEntity,TContains>> match,
IEnumerable<TContains> containList,
int chunkSize = 500)
{
return containList.ToChunks(chunkSize)
.Select (chunk => query.Where(x => chunk.Contains(match)))
.SelectMany(x => x);
}
Obviously, the part x => chunk.Contains(match) doesn't compile. But I don't know how to manipulate the match expression into a Contains expression.
Maybe someone can help me make this solution work. And of course I'm open to other approaches to make this statement scalable.
I solved this problem with a slightly different approach a few months ago. Maybe it's a good solution for you too.
I didn't want my solution to change the query itself, so an ids.ChunkContains(p.Id) or a special WhereContains method was unfeasible. The solution should also be able to combine a Contains with another filter, as well as use the same collection multiple times.
db.TestEntities.Where(p => (ids.Contains(p.Id) || ids.Contains(p.ParentId)) && p.Name.StartsWith("Test"))
So I tried to encapsulate the logic in a special ToList method that can rewrite the expression for a specified collection so that it is queried in chunks.
var ids = Enumerable.Range(1, 11);
var result = db.TestEntities.Where(p => ids.Contains(p.Id) && p.Name.StartsWith("Test"))
    .ToChunkedList(ids, 4);
To rewrite the expression tree, I discover all Contains method calls on local collections in the query with a few helper classes.
private class ContainsExpression
{
    public ContainsExpression(MethodCallExpression methodCall)
    {
        this.MethodCall = methodCall;
    }

    public MethodCallExpression MethodCall { get; private set; }

    public object GetValue()
    {
        var parent = MethodCall.Object ?? MethodCall.Arguments.FirstOrDefault();
        return Expression.Lambda<Func<object>>(parent).Compile()();
    }

    public bool IsLocalList()
    {
        Expression parent = MethodCall.Object ?? MethodCall.Arguments.FirstOrDefault();
        while (parent != null) {
            if (parent is ConstantExpression)
                return true;
            var member = parent as MemberExpression;
            if (member != null) {
                parent = member.Expression;
            } else {
                parent = null;
            }
        }
        return false;
    }
}

private class FindExpressionVisitor<T> : ExpressionVisitor where T : Expression
{
    public List<T> FoundItems { get; private set; }

    public FindExpressionVisitor()
    {
        this.FoundItems = new List<T>();
    }

    public override Expression Visit(Expression node)
    {
        var found = node as T;
        if (found != null) {
            this.FoundItems.Add(found);
        }
        return base.Visit(node);
    }
}
public static List<T> ToChunkedList<T, TValue>(this IQueryable<T> query, IEnumerable<TValue> list, int chunkSize)
{
    var finder = new FindExpressionVisitor<MethodCallExpression>();
    finder.Visit(query.Expression);
    var methodCalls = finder.FoundItems
        .Where(p => p.Method.Name == "Contains")
        .Select(p => new ContainsExpression(p))
        .Where(p => p.IsLocalList())
        .ToList();
    var localLists = methodCalls.Where(p => p.GetValue() == list).ToList();
If the local collection passed into the ToChunkedList method is found in the query expression, I replace the Contains call on the original list with a new call on a temporary list containing the ids of one batch.
    if (localLists.Any()) {
        var result = new List<T>();
        var valueList = new List<TValue>();
        var containsMethod = typeof(Enumerable).GetMethods(BindingFlags.Static | BindingFlags.Public)
            .Single(p => p.Name == "Contains" && p.GetParameters().Count() == 2)
            .MakeGenericMethod(typeof(TValue));
        var queryExpression = query.Expression;
        foreach (var item in localLists) {
            var parameter = new List<Expression>();
            parameter.Add(Expression.Constant(valueList));
            if (item.MethodCall.Object == null) {
                parameter.AddRange(item.MethodCall.Arguments.Skip(1));
            } else {
                parameter.AddRange(item.MethodCall.Arguments);
            }
            var call = Expression.Call(containsMethod, parameter.ToArray());
            var replacer = new ExpressionReplacer(item.MethodCall, call);
            queryExpression = replacer.Visit(queryExpression);
        }
        var chunkQuery = query.Provider.CreateQuery<T>(queryExpression);
        for (int i = 0; i < Math.Ceiling((decimal)list.Count() / chunkSize); i++) {
            valueList.Clear();
            valueList.AddRange(list.Skip(i * chunkSize).Take(chunkSize));
            result.AddRange(chunkQuery.ToList());
        }
        return result;
    }
    // if the collection was not found, just return query.ToList()
    return query.ToList();
}
Expression Replacer:
private class ExpressionReplacer : ExpressionVisitor {
private Expression find, replace;
public ExpressionReplacer(Expression find, Expression replace)
{
this.find = find;
this.replace = replace;
}
public override Expression Visit(Expression node)
{
if (node == this.find)
return this.replace;
return base.Visit(node);
}
}
Please allow me to provide an alternative to the Chunky approach.
The technique involving Contains in your predicate works well for:
A constant list of values (not volatile).
A small list of values.
Contains will do great if your local data has those two characteristics, because this small set of values will be hardcoded in the final SQL query.
The problem begins when your list of values has entropy (is not constant). As of this writing, Entity Framework (Classic and Core) does not try to parameterize these values in any way; this forces SQL Server to generate a query plan every time it sees a new combination of values in your query. That operation is expensive and is aggravated by the overall complexity of your query (e.g. many tables, a lot of values in the list, etc.).
The Chunky approach still suffers from this SQL Server query plan cache pollution problem, because it does not parameterize the query; it just splits the cost of creating one big execution plan into smaller plans that are easier for SQL Server to compute (and discard). Furthermore, every chunk adds an additional round-trip to the database, which increases the time needed to resolve the query.
An Efficient Solution for EF Core
🎉 NEW! QueryableValues EF6 Edition has arrived!
For EF Core keep reading below.
Wouldn't it be nice to have a way of composing local data in your query in a way that's SQL Server friendly? Enter QueryableValues.
I designed this library with these two main goals:
It MUST solve the SQL Server's query plan cache pollution problem ✅
It MUST be fast! ⚡
It has a flexible API that allows you to compose local data provided by an IEnumerable<T> and you get back an IQueryable<T>; just use it as if it were another entity of your DbContext (really), e.g.:
// Sample values.
IEnumerable<int> values = Enumerable.Range(1, 1000);
// Using a Join (query syntax).
var query1 =
from e in dbContext.MyEntities
join v in dbContext.AsQueryableValues(values) on e.Id equals v
select new
{
e.Id,
e.Name
};
// Using Contains (method syntax)
var query2 = dbContext.MyEntities
.Where(e => dbContext.AsQueryableValues(values).Contains(e.Id))
.Select(e => new
{
e.Id,
e.Name
});
You can also compose complex types!
It goes without saying that the provided IEnumerable<T> is only enumerated at the time that your query is materialized (not before), preserving the same behavior of EF Core in this regard.
How Does It Work?
Internally QueryableValues creates a parameterized query and provides your values in a serialized format that is natively understood by SQL Server. This allows your query to be resolved with a single round-trip to the database and avoids creating a new query plan on subsequent executions due to the parameterized nature of it.
Useful Links
Nuget Package
GitHub Repository
Benchmarks
SQL Server Cache Pollution Problem
QueryableValues is distributed under the MIT license
LinqKit to the rescue! There might be a better way that does this directly, but this seems to work fine and makes it pretty clear what's being done. The addition is AsExpandable(), which lets you use the Invoke extension.
using LinqKit;
public static IEnumerable<TEntity> ChunkyContains<TEntity, TContains>(
this IQueryable<TEntity> query,
Expression<Func<TEntity,TContains>> match,
IEnumerable<TContains> containList,
int chunkSize = 500)
{
return containList
.ToChunks(chunkSize)
.Select (chunk => query.AsExpandable()
.Where(x => chunk.Contains(match.Invoke(x))))
.SelectMany(x => x);
}
You might also want to do this:
containsList.Distinct()
.ToChunks(chunkSize)
...or something similar, so you don't get duplicate results if something like this occurs:
query.ChunkyContains(x => x.Id, new List<int> { 1, 1 }, 1);
Another way would be to build the predicate this way (of course, some parts should be improved, just giving the idea).
public static Expression<Func<TEntity, bool>> ContainsPredicate<TEntity, TContains>(this IEnumerable<TContains> chunk, Expression<Func<TEntity, TContains>> match)
{
return Expression.Lambda<Func<TEntity, bool>>(Expression.Call(
typeof (Enumerable),
"Contains",
new[]
{
typeof (TContains)
},
Expression.Constant(chunk, typeof(IEnumerable<TContains>)), match.Body),
match.Parameters);
}
which you could call in your ChunkyContains method:
return containList.ToChunks(chunkSize)
.Select(chunk => query.Where(ContainsPredicate(chunk, match)))
.SelectMany(x => x);
Using a stored procedure with a table-valued parameter could also work well. In effect you write a join in the stored procedure between your table/view and the table-valued parameter (a sketch follows after the link below).
https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/table-valued-parameters
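A minimal sketch of that idea, assuming a user-defined table type dbo.IdList with a single Id column and a stored procedure dbo.GetCoursesByIds that joins dbo.Course against it (both names are illustrative):
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class CourseQueries
{
    public static SqlDataReader GetCoursesByIds(SqlConnection connection, IEnumerable<int> ids)
    {
        // Shape the ids as rows of the dbo.IdList table type.
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        foreach (var id in ids)
            table.Rows.Add(id);

        var command = new SqlCommand("dbo.GetCoursesByIds", connection);
        command.CommandType = CommandType.StoredProcedure;

        var parameter = command.Parameters.AddWithValue("@Ids", table);
        parameter.SqlDbType = SqlDbType.Structured; // table-valued parameter
        parameter.TypeName = "dbo.IdList";          // the user-defined table type

        // Assumes the connection is already open; the proc performs the join server-side.
        return command.ExecuteReader();
    }
}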

Determine the root object of a MemberExpression in a LINQ Expression Tree

I'm currently working on a project that involves encrypting a few columns in an existing database. There is quite a lot of code already written against the current schema, much of it in the form of custom linq-to-sql queries. The number of queries is in the neighbourhood of a 5 figure number, so modifying and re-testing each and every one of them would be way too expensive.
An alternative we found is to keep the DB schema the same --only altering the column lengths slightly, which means we don't need to change our current entity class definitions-- and instead change the expression trees on-the-fly, before they reach the l2sql IQueryProvider, and apply a decryption function to the columns I need. I do this by wrapping the pertinent Table<TEntity> properties of my DataContext in a custom IQueryable<TEntity> implementation, which allows me to preview every single query in the system.
In my current implementation, say I've got this query:
var mydate = new DateTime(2013, 1, 1);
var context = new DataContextFactory.GetClientsContext();
Expression<Func<string>> foo = context.MyClients.First(
c => c.BirthDay < mydate).EncryptedColumn;
but when I catch the query, I change it to read:
Expression<Func<string>> foo = context.Decrypt(
context.MyClients.First(c => c.BirthDay < mydate).EncryptedColumn);
I do this using the ExpressionVisitor class. In the VisitMember method, I check whether the current MemberExpression refers to an encrypted column. If it does, I replace the expression with a method call:
private const string FuncName = "Decrypt";
protected override Expression VisitMember(MemberExpression ma)
{
    if (datactx != null && IsEncryptedColumnReference(ma))
    {
        return MakeCallExpression(ma);
    }
    return base.VisitMember(ma);
}
private static bool IsEncryptedColumnReference(MemberExpression ma)
{
return ma.Member.Name == "EncryptedColumn"
&& ma.Member.DeclaringType == typeof(MyClient);
}
private Expression MakeCallExpression(MemberExpression ma)
{
const BindingFlags flags = BindingFlags.Instance | BindingFlags.Public;
var mi = typeof(MyDataContext).GetMethod(FuncName, flags);
return Expression.Call(datactx, mi, ma);
}
datactx is an instance variable with a reference to the expression pointing at the current datacontext (which I look up in a previous pass).
My problem is that if I have a query such as:
var qbeClient = new MyClient { EncryptedColumn = "FooBar" };
Expression<Func<MyClient>> dbquery = () => context.MyClients.First(
c => c.EncryptedColumn == qbeClient.EncryptedColumn);
I want it to be turned into:
Expression<Func<MyClient>> dbquery = () => context.MyClients.First(c =>
context.Decrypt(c.EncryptedColumn) == qbeClient.EncryptedColumn);
instead, what I'm getting is this:
Expression<Func<MyClient>> dbquery = () => context.MyClients.First(c =>
context.Decrypt(c.EncryptedColumn) == context.Decrypt(qbeClient.EncryptedColumn));
Which I don't want, because when I've got an in-memory object, the data is already unencrypted (besides, I don't want a nasty db function call against my objects!)
So, that's basically my question: Having a MemberExpression instance, how can I determine whether it refers to an in-memory object or a row in the database?
Thanks in advance
Edit:
@Shlomo's code actually solves the case I posted, but now one of my previous tests is broken:
var context = new DataContextFactory.GetClientsContext();
Expression<Func<string>> expr = context.MyClients.First().EncryptedColumn;
Expression<Func<string>> expected = context.Decrypt(
context.MyClients.First().EncryptedColumn);
var actual = MyVisitor.Visit(expr);
Assert.AreEqual(expected.ToString(), actual.ToString());
In this case, the reference to EncryptedColumn isn't a parameter, but it should definitely be taken into account by the visitor!
A MemberExpression representing a DB row will be a descendant of a ParameterExpression. In-memory objects will not; they'll most likely come from some form of FieldExpression.
In your case, something like this will work for most cases (adding one method to your code and revising your VisitMember method):
private bool IsFromParameter(MemberExpression ma)
{
    if (ma.Expression.NodeType == ExpressionType.Parameter)
        return true;
    if (ma.Expression is MemberExpression)
        return IsFromParameter(ma.Expression as MemberExpression);
    return false;
}

protected override Expression VisitMember(MemberExpression ma)
{
    if (datactx != null && IsEncryptedColumnReference(ma) && IsFromParameter(ma))
    {
        return MakeCallExpression(ma);
    }
    return base.VisitMember(ma);
}

How to extend LINQ select method in my own way

The following statement works fine if the source is not null:
Filters.Selection
.Select(o => new GetInputItem() { ItemID = o.ItemId })
It bombs if "Filters.Selection" is null (obviously). Is there any possible way to write my own extension method which returns null if the source is null, or else executes the "Select" func if the source is not null?
Say, something like the following:
var s = Filters.Selection
.MyOwnSelect(o => new GetInputItem() { ItemID = o.ItemId })
"s" would be null if "Filters.Selection" is null, or else, "s" would contain the evaluated "func" using LINQ Select.
This is only to learn more about LINQ extensions/customizations.
thanks.
You could do this:
public static IEnumerable<U> SelectOrNull<T,U>(this IEnumerable<T> seq, Func<T,U> map)
{
if (seq == null)
return Enumerable.Empty<U>(); // Or return null, though this will play nicely with other operations
return seq.Select(map);
}
Yes, have a look at the Enumerable and Queryable classes in the framework; they implement the standard query operators.
You would need to implement a similar class with Select extension methods matching the same signatures. Then, if the source is null, exit early and return an empty sequence.
Assuming you're talking about LINQ to Objects, absolutely:
public static class NullSafeLinq
{
public static IEnumerable<TResult> NullSafeSelect<TSource, TResult>
(this IEnumerable<TSource> source, Func<TSource, TResult> selector)
{
// We don't intend to be safe against null projections...
if (selector == null)
{
throw new ArgumentNullException("selector");
}
return source == null ? null : source.Select(selector);
}
}
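A quick usage sketch against the question's own example (GetInputItem and Filters.Selection come from the question above):
var s = Filters.Selection
    .NullSafeSelect(o => new GetInputItem() { ItemID = o.ItemId });

if (s == null)
{
    // Filters.Selection was null; decide here what an absent selection means.
}
else
{
    var items = s.ToList();
}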
You may also want to read my Edulinq blog post series to learn more about how LINQ to Objects works.

How to do a nested count with OData and LINQ?

Here is the query I am trying to run from my OData source:
var query = from j in _auditService.AuditJobs.IncludeTotalCount()
orderby j.Description
select new
{
JobId = j.ID,
Description = j.Description,
SubscriberCount = j.JobRuns.Count()
};
It runs great if I don't use the j.JobRuns.Count(), but if I include it I get the following error:
Constructing or initializing instances
of the type
<>f__AnonymousType1`3[System.Int32,System.String,System.Int32]
with the expression j.JobRuns.Count()
is not supported.
It seems to be a problem of attempting to get the nested count through OData. What is a work around for this? I was trying to avoid getting the whole nested collection for each object just to get a count.
Thanks!
As of today the OData protocol doesn't support aggregates.
Projections yes, but projections that include aggregate properties no.
Alex
You need .NET 4.0, and in LINQPad you can run the following over the Netflix OData service:
void Main()
{
ShowPeopleWithAwards();
ShowTitles();
}
// Define other methods and classes here
public void ShowPeopleWithAwards()
{
var people = from p in People.Expand("Awards").AsEnumerable()
where p.Awards.Count > 0
orderby p.Name
select new
{
p.Id,
p.Name,
AwardCount = p.Awards.Count,
TotalAwards = p.Awards.OrderBy (a => a.Type).Select (b => new { b.Type, b.Year} )
};
people.Dump();
}
public void ShowTitles()
{
var titles = from t in Titles.Expand("Awards").AsEnumerable()
where t.ShortName != string.Empty &&
t.ShortSynopsis != string.Empty &&
t.Awards.Count > 0
select t;
titles.Dump();
}
You can now use my product AdaptiveLINQ and the extension method QueryByCube.
