I am using the following function to query an EF6 model using a bunch of query expressions that I have built programmatically:
/// <summary>
/// Common application of generated queries to select the required resources
/// </summary>
/// <typeparam name="T">The EF model type</typeparam>
/// <param name="source">The source of models (a DbSet)</param>
/// <param name="queries">A collection of query pairs (database, local) to be applied as Where clauses</param>
/// <returns>The Ids of the matching resources, as strings</returns>
protected static IEnumerable<string> ApplyCriteriaBase<T>(IQueryable<T> source, IEnumerable<(Expression, Expression)> queries) where T : DbResource
{
    // Apply the database queries sequentially as Where clauses
    var resources = queries
        .Select(p => p.Item1)
        .Aggregate(source, (ds, qry) => ds.Where(qry as Expression<Func<T, bool>>))
        .Distinct();

    // If there are no local queries then don't bother fetching the whole object, just select the id in the generated SQL
    if (queries.All(q => q.Item2 == null))
        return resources
            .Select(a => a.Id.ToString());

    // Apply local queries.
    // AsEnumerable() escapes from the IQueryable monad so the subsequent queries are executed locally on IEnumerable.
    // This requires fetching the whole object because we don't know which bit is to be tested by the local query.
    // Note that IEnumerable doesn't understand query expressions so we have to compile them to code and then invoke them.
    return queries
        .Select(p => p.Item2)
        .Where(q => q != null)
        .Aggregate(
            resources.ToList() as IEnumerable<T>, // .AsEnumerable() doesn't work
            (ds, qry) => ds.Where((qry as Expression<Func<T, bool>>).Compile().Invoke))
        .Select(a => a.Id.ToString());
}
The idea is that queries contains 2-tuples of query expressions, where the first of each pair is to be run against the database (MySQL) and the second, if present, is to be run against the EF models returned.
It is my understanding that AsEnumerable() should permit the second query to be executed locally and lazily without pulling all of the results into memory. Unfortunately this fails with the error:
"There is already an open DataReader associated with this Connection
which must be closed first."
The fix shown in the example, converting the result stream to a List, works correctly but is strict.
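To show exactly what I mean by strict versus lazy at that step inside the method, the difference is just this one line (the variable names here are only for illustration):
// Strict: pulls every matching DbResource into memory before the local predicates run.
IEnumerable<T> strictSource = resources.ToList();

// Lazy (what I expected to be able to use): streams rows one at a time,
// which keeps the DataReader open while the compiled local predicates run.
IEnumerable<T> lazySource = resources.AsEnumerable();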
Why doesn't AsEnumerable() work and is there another way to process the results lazily? Some instances of DbResource are very large and I only want to accumulate the Id of each resource.
Thanks,
Andy
The packages I use:
NHibernate 5.2.1
NHibernate.Caches.SysCache 5.5.1
The NH cache config:
<configuration>
<configSections>
<section name="syscache" type="NHibernate.Caches.SysCache.SysCacheSectionHandler,NHibernate.Caches.SysCache" />
</configSections>
<syscache>
<!-- 3600 s = 1 h; priority 3 = normal cost of expiration -->
<cache region="GeoLocation" expiration="3600" sliding="true" priority="3" />
</syscache>
</configuration>
I want to query a bunch of locations using their unique primary keys. In this unit test I simulate two requests using different sessions but the same session factory:
[TestMethod]
public void UnitTest()
{
var sessionProvider = GetSessionProvider();
using (var session = sessionProvider.GetSession())
{
var locations = session
.QueryOver<GeoLocation>().Where(x => x.LocationId.IsIn(new[] {147643, 39020, 172262}))
.Cacheable()
.CacheRegion("GeoLocation")
.List();
Assert.AreEqual(3, locations.Count);
}
Thread.Sleep(1000);
using (var session = sessionProvider.GetSession())
{
var locations = session
.QueryOver<GeoLocation>().Where(x => x.LocationId.IsIn(new[] { 39020, 172262 }))
.Cacheable()
.CacheRegion("GeoLocation")
.List();
Assert.AreEqual(2, locations.Count);
}
}
If the exact same IDs were queried in the exact same order, the second call would fetch the objects from the cache. In this example, however, the query is called with only two of the previously submitted IDs. Although the locations have already been cached, the second query fetches them from the DB.
I expected the cache to work like a table that is queried first: only the IDs that have not been cached yet should trigger a DB call. But apparently the whole query is used as the hash key for the cached objects.
Is there any way to change that behavior?
There is no notion of a partial query cache; it's all or nothing: if the results for this exact query are found, they are used, otherwise the database is queried. This is because the query cache system has no knowledge of the meaning of the queries (e.g. it cannot infer that the result of a particular query is a subset of some cached result).
In other words, the query cache in NHibernate acts as a document store rather than a relational table store. The key for the document is a combination of the query's SQL (in the case of LINQ, some textual representation of the expression tree), all parameter types, and all parameter values.
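To make that concrete, here is a toy model of the key for the two calls in the question (illustrative only; NHibernate's real key is an internal QueryKey object, not a tuple):
// The keys differ only in the IN-list, but that is enough for the second lookup to miss.
var key1 = (Sql: "... where LocationId in (?, ?, ?)", Values: "147643, 39020, 172262");
var key2 = (Sql: "... where LocationId in (?, ?)", Values: "39020, 172262");
Console.WriteLine(key1.Equals(key2)); // False -> cache miss, so the database is queried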
To solve your particular case, I would suggest doing some performance testing. Depending on the results and the dataset size, there are a few possible solutions: filter the cached results on the client (something like the following), don't use the query cache for this query at all, or implement some caching mechanism for this particular query at the application level.
[TestMethod]
public void UnitTest()
{
var sessionProvider = GetSessionProvider();
using (var session = sessionProvider.GetSession())
{
var locations = session
.QueryOver<GeoLocation>()
.Cacheable()
.CacheRegion("GeoLocation")
.List()
.Where(x => new[] {147643, 39020, 172262}.Contains(x.LocationId))
.ToList();
Assert.AreEqual(3, locations.Count);
}
Thread.Sleep(1000);
using (var session = sessionProvider.GetSession())
{
var locations = session
.QueryOver<GeoLocation>()
.Cacheable()
.CacheRegion("GeoLocation")
.List()
.Where(x => new[] {39020, 172262}.Contains(x.LocationId))
.ToList();
Assert.AreEqual(2, locations.Count);
}
}
More information on how the (N)Hibernate query cache works can be found here.
I'm using the C# driver for MongoDB and trying to edit some MongoDB elements. When deserializing BSON, I'm using the [IgnoreExtraElements] tag to filter out fields I don't really care about editing. The problem is when I'm trying to serialize the elements back into the Mongo database. Instead of changing only the fields I've edited, serializing elements back overwrites the whole object.
For example, I'm changing a Word element with C# properties:
[BsonId]
public ObjectId _id;
public string word;
In MongoDB it also has the element "conjugations", which is a kind of complicated array I don't want to serialize or mess with. But when I try something similar to the below code, it blanks out the conjugations array.
MongoWord word = collection.FindOneAs<MongoWord>(Query.EQ("word","hello"));
word.word = "world";
collection.Save(word);
How can I avoid overwriting extra fields in the Mongo database? Right now I'm trying to write an Update.Set builder query and update only the fields I've changed, using Reflection/Generics. Is there something easy like a reverse [IgnoreExtraElements], or an update setting that I'm missing here?
That's why IgnoreExtraElements must be specified manually. You are essentially opting-in to potentially losing data.
The correct way to handle this is to actually support extra elements. You can see this section in the documentation for how to do this: https://mongodb.github.io/mongo-csharp-driver/2.12/reference/bson/mapping/#supporting-extra-elements
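For reference, supporting extra elements amounts to adding a catch-all member to the class, along these lines (a sketch based on that documentation; the property name is arbitrary):
public class MongoWord
{
    [BsonId]
    public ObjectId _id;

    public string word;

    // Catch-all for elements the class doesn't model (such as "conjugations");
    // they are read into this document and written back unchanged on Save.
    [BsonExtraElements]
    public BsonDocument ExtraElements { get; set; }
}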
This is the function I ended up using. I didn't want to use [BsonExtraElements] since it seemed unnecessary to pull down ExtraElements just to save them again if they hadn't been edited. Instead I'm using the mongo UpdateBuilder to only update the fields that I've changed/brought down from mongo. The solution is problematic if I need to clear fields by setting them to null.
/// <summary>
/// Update a Mongo object without overwriting fields that C# thinks are null or doesn't know about
/// </summary>
/// <typeparam name="T">The type of the mongo object</typeparam>
/// <param name="model">The mongo object to update</param>
/// <param name="collection">The collection the object should go in</param>
/// <param name="objectId">The Bson ObjectId of the existing object in collection</param>
public void UpdateNotNullColumns<T>(T model, MongoCollection<T> collection, ObjectId objectId)
{
    if (objectId == default(ObjectId))
    {
        return;
    }
    // Build an update query with all the non-null fields using reflection
    Type type = model.GetType();
    FieldInfo[] fields = type.GetFields();
    UpdateBuilder builder = new UpdateBuilder();
    foreach (var field in fields)
    {
        BsonValue bsonValue = BsonValue.Create(field.GetValue(model));
        if (bsonValue != null)
        {
            builder.Set(field.Name, bsonValue);
        }
    }
    // Actually update!
    collection.Update(Query.EQ("_id", objectId), builder);
}
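Usage then looks roughly like this (a sketch; it assumes collection is a MongoCollection<MongoWord> and MongoWord is the class from the question):
MongoWord word = collection.FindOneAs<MongoWord>(Query.EQ("word", "hello"));
word.word = "world";
// Builds a $set containing only the non-null fields of the model,
// so the "conjugations" array in the stored document is left alone.
UpdateNotNullColumns(word, collection, word._id);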
Let's say I have a method like this:
IQueryable<MyFlatObject> GetMyFlatObjects()
{
using (var context = new MyEntities())
{
return context.MyEntities.Select(x => new MyFlatObject()
{
Property1 = x.PropertyA,
Property2 = x.PropertyB,
Property3 = x.PropertyC,
});
}
}
Now if I call:
MyService.GetMyFlatObjects().Where(x => x.Property1 == "test");
Sanity check. This filter will not propagate to my database store (like if I had just queried my entities), but instead I will get all results back and be using LINQ-to-objects to filter. Right?
I think that's not right. First, it doesn't query anything, because you are only composing a new IQueryable<T> on top of an existing one. If you call ToList() or anything else that causes the query to execute, you'll get an exception, because the context has already been disposed at the end of the using block. If you don't dispose the context, the Where filter will be translated to SQL and executed in the database; I believe it behaves the same way as if you had applied the Where to PropertyA before the Select.
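A short sketch of the three cases described above, reusing the MyService call from the question:
var filtered = MyService.GetMyFlatObjects().Where(x => x.Property1 == "test");
// Nothing has been sent to the database yet; filtered is just a composed IQueryable<MyFlatObject>.

// Forcing execution now throws, because the context was disposed when GetMyFlatObjects returned:
// var results = filtered.ToList();

// If GetMyFlatObjects did not dispose its context, ToList() here would instead send a single SQL
// query with the filter translated into the WHERE clause (against PropertyA).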
I am trying to query Azure Table Storage. For that I use the following two pieces of code:
TableServiceContext:
public IQueryable<T> QueryEntities<T>(string tableName) where T : TableServiceEntity
{
this.ResolveType = (unused) => typeof(T);
return this.CreateQuery<T>(tableName);
}
Code that uses the method above:
CloudStorageAccount account = AzureConnector.GetCloudStorageAccount(AppSettingsVariables.TableStorageConnection);
AzureTableStorageContext context = new AzureTableStorageContext(account.TableEndpoint.ToString(), account.Credentials);
// Checks if the setting already exists in the Azure table storage. Returns null if not exists.
var existsQuery = from e in context.QueryEntities<ServiceSettingEntity>(TableName)
where e.ServiceName.Equals(this.ServiceName) && e.SettingName.Equals(settingName)
select e;
ServiceSettingEntity existingSettingEntity = existsQuery.FirstOrDefault();
The LINQ query above generates the following request url:
http://127.0.0.1:10002/devstoreaccount1/PublicSpaceNotificationSettingsTable()?$filter=(ServiceName eq 'PublicSpaceNotification') and (SettingName eq 'expectiss')
The code in the class generates the following MissingMethodException:
I have looked at the supported LINQ queries for the Table API;
looked at several working Stack Overflow solutions;
tried IgnoreResourceNotFoundException on the TableServiceContext (user comments of QueryOperators);
tried to convert the LINQ query with ToList() before calling FirstOrDefault (user comments of QueryOperators);
but I can't get this to work.
Make sure you have a parameterless constructor for the class "ServiceSettingEntity". The 'DTO' that inherits from TableServiceEntity needs a constructor with no parameters.
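For example, the entity could look something like this (just a sketch; the property names are taken from the query in the question):
public class ServiceSettingEntity : TableServiceEntity
{
    // Required: the client library creates instances through this parameterless
    // constructor when it materializes query results.
    public ServiceSettingEntity()
    {
    }

    public string ServiceName { get; set; }
    public string SettingName { get; set; }
}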
I have the following method: I can pass in a lambda expression to filter my results and then a callback method that will work on the list of results. This is just one particular table in my system, and I will use this construct over and over. How can I build a generic method, say DBget, that takes a table as a parameter (an ADO.NET Data Services entity, to be fair) and a filter (a lambda expression)?
public void getServiceDevelopmentPlan(Expression<Func<tblServiceDevelopmentPlan, bool>> filter, Action<List<tblServiceDevelopmentPlan>> callback)
{
var query = from employerSector in sdContext.tblServiceDevelopmentPlan.Where(filter)
select employerSector;
var DSQuery = (DataServiceQuery<tblServiceDevelopmentPlan>)query;
DSQuery.BeginExecute(result =>
{
callback(DSQuery.EndExecute(result).ToList<tblServiceDevelopmentPlan>());
}, null);
}
My first bash at this is:
public delegate Action<List<Table>> DBAccess<Table>(Expression<Func<Table, bool>> filter);
If you are using LINQ to ADO.NET Data Services or WCF Data Services, your model will generate a lot of typed classes for you. Generally, though, you will just be selecting and filtering. You need the following two methods; all your other methods are just candy over the top of these:
Query Type 1 - One Filter, returns a list:
public void makeQuery<T>(string entity, Expression<Func<T, bool>> filter, Action<List<T>> callback)
{
IQueryable<T> query = plussContext.CreateQuery<T>(entity).Where(filter);
var DSQuery = (DataServiceQuery<T>)query;
DSQuery.BeginExecute(result =>
{
callback(DSQuery.EndExecute(result).ToList<T>());
}, null);
}
Query Type 2 - One Filter, returns a single entity:
public void makeQuery<T>(string entity, Expression<Func<T, bool>> filter, Action<T> callback)
{
IQueryable<T> query = plussContext.CreateQuery<T>(entity).Where(filter);
var DSQuery = (DataServiceQuery<T>)query;
DSQuery.BeginExecute(result =>
{
callback(DSQuery.EndExecute(result).First<T>());
}, null);
}
What you need to do is overload these and swap out the filter for a simple array of filters
Expression<Func<T, bool>>[] filter
And repeat for single and list returns.
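As a sketch, the list-returning overload with an array of filters could look like this (same plussContext as above; each filter simply becomes another Where clause):
public void makeQuery<T>(string entity, Expression<Func<T, bool>>[] filters, Action<List<T>> callback)
{
    IQueryable<T> query = plussContext.CreateQuery<T>(entity);
    foreach (var filter in filters)
    {
        query = query.Where(filter);
    }
    var DSQuery = (DataServiceQuery<T>)query;
    DSQuery.BeginExecute(result =>
    {
        callback(DSQuery.EndExecute(result).ToList<T>());
    }, null);
}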
Bundle this into a singleton if you want one data context, or keep track of an array of contexts in some sort of hybrid factory/singleton. Let the constructor take a context, or, if none is supplied, let it use its own, and you are away.
I then use this in one big line, but all in one place:
GenericQuery.Instance.Create().makeQuery<tblAgencyBranches>("tblAgencyBranches", f => f.tblAgencies.agencyID == _agency.agencyID, res => { AgenciesBranch.ItemsSource = res; });
This may look complicated but it hides a lot of async magic, and in certain instances can be called straight from the button handlers. Not so much a 3 tier system, but a huge time saver.