Insert a list of objects into my room database at once, and the order of objects is changed - android-room

I tried to insert a list of objects from my legacy LitePal database into my Room database, but when I retrieved them from Room, I found that the order of my objects no longer matched that of my old list.
Below is my code:
// litepal is my legacy database
val litepalNoteList = LitePal.findAll(Note::class.java)
if (litepalNoteList.isNotEmpty()) {
    litepalNoteList.forEach { note ->
        // before insertion, I want to turn my legacy Note objects into TrainingLog objects
        // Note and TrainingLog are of different types, but their content should be the same
        val noteContent = note.note_content
        val htmlContent = note.html_note_content
        val createdDate = note.created_date
        val isMarked = note.isLevelUp
        val legacyLog = TrainingLog(
            noteContent = noteContent,
            htmlLogContent = htmlContent,
            createdDate = createdDate,
            isMarked = isMarked)
        logViewModel.viewModelScope.launch(Dispatchers.IO) {
            trainingLogDao.insertNewTrainingLog(legacyLog)
        }
    } // the end of forEach
}
The problem is that in my Room database, the order of the TrainingLog objects differs randomly from that of my old list in the LitePal database.
Anyone know why this is happening?

If the order matters, then you should extract the data using an ORDER BY clause. Otherwise you are leaving the order up to the query optimiser.
So instead of @Query("SELECT * FROM trainingLog"), you could order the result by using @Query("SELECT * FROM trainingLog ORDER BY createdDate ASC").
The efficiency of extracting the above would be improved by having an index on the createdDate column/field (in Room, @ColumnInfo(index = true)). However, it should be noted that there are overheads to having an index: insertions, deletions and updates may incur additional processing to maintain the index, and an index uses more space.
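For illustration, the index could be declared on the entity along these lines (a sketch only; the table name is taken from the query above, while the field types and primary key are assumptions, since the question doesn't show the TrainingLog entity):
@Entity(tableName = "trainingLog")
data class TrainingLog(
    @PrimaryKey(autoGenerate = true) val id: Long = 0, // assumed primary key
    val noteContent: String,
    val htmlLogContent: String,
    @ColumnInfo(index = true) val createdDate: Long, // indexed to speed up ORDER BY createdDate
    val isMarked: Boolean
)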
You may wish to have an insert function that takes a list rather than running multiple threaded inserts. Room will then (I believe) do all the inserts in a single transaction (one disk write instead of many).
e.g.
instead of or as well as
@Insert
fun insert(trainingLog: TrainingLog): Long
you could have
@Insert
fun insert(trainingLogList: List<TrainingLog>): LongArray
Then all you need to do is build the List in your loop and, after the loop, invoke the single insert, as sketched below.
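Here is a minimal sketch of that reshaped migration, reusing the question's code and assuming the List-based insert function above has been added to the DAO:
val litepalNoteList = LitePal.findAll(Note::class.java)
if (litepalNoteList.isNotEmpty()) {
    // Build the whole list first...
    val legacyLogs = litepalNoteList.map { note ->
        TrainingLog(
            noteContent = note.note_content,
            htmlLogContent = note.html_note_content,
            createdDate = note.created_date,
            isMarked = note.isLevelUp)
    }
    // ...then insert it with a single DAO call, i.e. one transaction.
    logViewModel.viewModelScope.launch(Dispatchers.IO) {
        trainingLogDao.insert(legacyLogs)
    }
}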

Related

Getting max value on server (Entity Framework)

I'm using EF Core but I'm not really an expert with it, especially when it comes to details like querying tables in a performant manner...
So what I'm trying to do is simply get the max value of one column from a table with filtered data.
What I have so far is this:
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();
    // Filter table data to the rows relevant to us. The whole table may contain 0 rows or millions of them.
    IQueryable<Measurement> dbMeasuringsExisting = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
            && meas.MachineId == DBMatchingItem.Id);
    if (dbMeasuringsExisting.Any())
    {
        // The max value we're interested in. dbMeasuringsExisting could still contain millions of rows.
        iMaxMessID = dbMeasuringsExisting.Max(meas => meas.MessID);
    }
}
The equivalent SQL to what I want would be something like this.
select max(MessID)
from Measurement
where MeasuringInstanceGuid = Globals.MeasProgInstance.Guid
and MachineId = DBMatchingItem.Id;
While the above code works (it returns the correct value), I think it has a performance issue when the database table is getting larger, because the max filtering is done at the client-side after all rows are transferred, or am I wrong here?
How to do it better? I want the database server to filter my data. Of course I don't want any SQL script ;-)
This can be addressed by typing the return as nullable, so that you do not get an error when no rows match, and then applying a default value for the int. Alternatively, you can just assign it to a nullable int. Note the assumption here of an integer return type for the ID. The same principle would apply to a Guid as well.
int MaxMessID = dbMeasuringsExisting.Max(p => (int?)p.MessID) ?? 0;
There is no need for the Any() call, as it causes an additional round trip to the database, which is not desirable in this case.
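Putting both points together, the method from the question could be reduced to a single round trip (a sketch reusing the question's names and assuming MessID is an int):
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();
    // The server computes MAX(MessID) over the filtered rows; only one value is transferred.
    iMaxMessID = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
            && meas.MachineId == DBMatchingItem.Id)
        .Max(meas => (int?)meas.MessID) ?? 0;
}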

How to get a SOQL query out of a for loop

I have searched endlessly for a solution here and cannot find one, please help!
I have three objects (A, B, and C). A has a lookup to B, and B is the master to C (detail). Both A and C have many records related to each B record.
I want to have a job run that gets a subset of records from object C (it will usually be around 5,000 records). Then go through each of those and get the records on Object A that lookup to the same Object B record, summarize an Object A number field, and put that on the C record.
I have successfully gotten this to work at small scale (<100 Object C records). But each Object C record requires a new SOQL query, since I am iterating through them in a for loop after I get all the Object C records. Plus, I know it is not best practice to ever have a query in a loop.
How can I get this to work? Since the records share the relationship with Object B, is there another way to get the data from the matching Object A records? Or is there some way to pull two lists, one of Object C and one of Object A, then summarize the Object A records and line the lists up somehow?
Thanks in advance!
Code:
public class nightlyJob {
    public static void updateNumbers(){
        integer I = 29;
        List<ObjectC__c> CUpdateList = new List<ObjectC__c>();
        List<ObjectC__c> CpullList =
            [SELECT ID, Index__c, ObjectB__r.id
             FROM ObjectC__c
             WHERE Index__c = :I];
        for(ObjectC__c s : CpullList){
            List<ObjectA__c> AList =
                [SELECT ObjectB__c, Number__c
                 FROM ObjectA__c
                 WHERE ObjectB__c = :s.ObjectB__r.Id];
            decimal NumSum = 0;
            for(ObjectA__c a : AList){
                NumSum = a.Number__c + NumSum;
            }
            s.Num__c = NumSum;
            CUpdateList.add(s);
        }
        update CUpdateList;
    }
}
It looks like you are really missing several fundamental concepts at the moment.
The biggest problem you are up against in SFDC development is that "database" operations are very expensive and are strictly limited. It's not just a matter of "best practice": if in a single transaction you exceed these limits -- number of SOQL calls, number of records returned, number of records updated, number of DML statements, etc. -- your transaction will fail. For details, search online for "Salesforce Execution Governors and Limits".
You can write code that works within these limitations, but there is a bit of a learning curve.
First, learn to use collections with SOQL queries to get your SOQL queries out of loops. This is a.k.a. "bulkification" and it is fundamental to SFDC development:
List<ObjectC__c> CpullList =
    [SELECT ID, Index__c, ObjectB__r.id
     FROM ObjectC__c
     WHERE Index__c = :I];
// Create a map with the results of this query.
// key = ObjectC__c.Id, value = ObjectC__c record
Map<Id, ObjectC__c> objCmap = new Map<Id, ObjectC__c>(CpullList);
// Build a set of all the Object B ids from this result set
Set<Id> objBids = new Set<Id>();
for (ObjectC__c record : CpullList) {
    objBids.add(record.ObjectB__r.id);
}
// Now you can use only one SOQL query instead of a query in a loop
List<ObjectA__c> AList = [SELECT ObjectB__c, Number__c
                          FROM ObjectA__c
                          WHERE ObjectB__c IN :objBids];
Next, use "SOQL aggregate functions" whenever you can. For example, in your code here, you could use SUM() and GROUP BY instead of performing these calculations with loops:
// Get the sum of ObjectA__c.Number__c for each Object B in objBids
AggregateResult[] groupedResults = [SELECT ObjectB__c,
                                    SUM(Number__c) sumA
                                    FROM ObjectA__c
                                    WHERE ObjectB__c IN :objBids
                                    GROUP BY ObjectB__c];
for (AggregateResult ar : groupedResults) {
    System.debug('Object B Id: ' + ar.get('ObjectB__c'));
    System.debug('Sum of ObjectA__c.Number__c: ' + ar.get('sumA'));
    // Here, you might want to build a Map<Id, Decimal> sumAmap:
    // key = Object B Id, value = sumA
    // and then use it along with objCmap to build a collection of Object C's
    // for your update statement (see the sketch below)...
}
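To make those comments concrete, here is a hedged sketch of the remaining steps, reusing the variables above (Num__c and its Decimal type are assumptions carried over from the question):
// Build the sum map from the aggregate results...
Map<Id, Decimal> sumAmap = new Map<Id, Decimal>();
for (AggregateResult ar : groupedResults) {
    sumAmap.put((Id) ar.get('ObjectB__c'), (Decimal) ar.get('sumA'));
}
// ...then stamp each Object C record and update them all in a single DML statement.
List<ObjectC__c> cUpdateList = new List<ObjectC__c>();
for (ObjectC__c c : objCmap.values()) {
    Decimal numSum = sumAmap.get(c.ObjectB__r.Id);
    c.Num__c = (numSum == null) ? 0 : numSum;
    cUpdateList.add(c);
}
update cUpdateList;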
You can continue this process and apply these ideas to make the code more efficient.
But even after you have your methods working as efficiently as possible, you may still run into limits due to the number of records you're dealing with. At that point, you will need to learn about the Batchable interface, the Queueable interface and @future calls (how to process a larger number of records, split across transactions). That's really too much to cover in a single SO answer, but the skeleton below shows the general shape of a batch class.
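A hedged sketch for orientation only (the query and field names are assumptions based on the question's objects):
public class NightlyJobBatch implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        // A QueryLocator can feed up to 50 million records into the job.
        return Database.getQueryLocator(
            'SELECT Id, ObjectB__c FROM ObjectC__c WHERE Index__c = 29');
    }
    public void execute(Database.BatchableContext bc, List<SObject> scope) {
        // Runs once per chunk (200 records by default), each chunk in its own
        // transaction with its own governor limits; apply the bulkified logic here.
    }
    public void finish(Database.BatchableContext bc) {
        // Runs once at the end; useful for notifications or chaining another job.
    }
}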

How to avoid Query Plan re-compilation when using IEnumerable.Contains in Entity Framework LINQ queries?

I have the following LINQ query executed using Entity Framework (v6.1.1):
private IList<Customer> GetFullCustomers(IEnumerable<int> customersIds)
{
    IQueryable<Customer> fullCustomerQuery = GetFullQuery();
    return fullCustomerQuery.Where(c => customersIds.Contains(c.Id)).ToList();
}
This query is translated into fairly nice SQL:
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[FirstName] AS [FirstName]
-- ...
FROM [dbo].[Customer] AS [Extent1]
WHERE [Extent1].[Id] IN (1, 2, 3, 5)
However, I get a very significant performance hit in the query compilation phase. Calling:
ELinqQueryState.GetExecutionPlan(MergeOption? forMergeOption)
takes ~50% of the time of each request. Digging deeper, it turned out that the query gets re-compiled every time I pass different customersIds.
According to the MSDN article, this is expected behavior, because an IEnumerable used in a query is considered volatile and is part of the SQL that gets cached. That's why the SQL is different for each different combination of customersIds, and each always has a different hash that is used to look the compiled query up in the cache.
Now the question is: How can I avoid this re-compilation while still querying with multiple customersIds?
This is a great question. First of all, here are a couple of workarounds that come to mind (they all require changes to the query):
First workaround
This one may be a bit obvious and is unfortunately not generally applicable: if the selection of items you need to pass to Enumerable.Contains already exists in a table in the database, you can write a query that calls Enumerable.Contains on the corresponding entity set in the predicate, instead of bringing the items into memory first. An Enumerable.Contains call over data in the database should result in some kind of JOIN-based query that can be cached. E.g., assuming no navigation properties between Customers and SelectedCustomers, you should be able to write the query like this:
var q = db.Customers.Where(c =>
db.SelectedCustomers.Select(s => s.Id).Contains(c.Id));
The syntax of the query with Any is a bit simpler in this case:
var q = db.Customers.Where(c =>
db.SelectedCustomers.Any(s => s.Id == c.Id));
If you don't already have the necessary selection data stored in the database, you probably won't want the overhead of having to store it, so you should consider the next workaround.
Second workaround
If you know beforehand that you will have a relatively manageable maximum number of elements in the list, you can replace Enumerable.Contains with a tree of OR-ed equality comparisons, e.g.:
var list = new [] {1,2,3};
var q = db.Customers.Where(c =>
list[0] == c.Id ||
list[1] == c.Id ||
list[2] == c.Id );
This should produce a parameterized query that can be cached. If the list varies in size from query to query, this will produce a different cache entry for each list size. Alternatively, you could use a list with a fixed size and pass some sentinel value that you know will never match the value argument, e.g. 0 or -1, or just repeat one of the other values (a sketch of this padding idea follows). In order to produce such a predicate expression programmatically at runtime based on a list, you might want to consider using something like PredicateBuilder.
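To illustrate the fixed-size padding idea, here is a hedged sketch (it assumes 0 never matches a real Id and that the incoming list has at most four elements):
var raw = new List<int> { 1, 2 };
const int fixedSize = 4;
// Pad to a fixed length so the generated SQL shape (and its cache entry) stays stable.
var ids = raw.Concat(Enumerable.Repeat(0, fixedSize - raw.Count)).ToArray();
var q = db.Customers.Where(c =>
    ids[0] == c.Id || ids[1] == c.Id ||
    ids[2] == c.Id || ids[3] == c.Id);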
Potential fixes and their challenges
On one hand, changes necessary to support caching of this kind of query using CompiledQuery explicitly would be pretty complex in the current version of EF. The key reason is that the elements in the IEnumerable<T> passed to the Enumerable.Contains method would have to translate into a structural part of the query for the particular translation we produce, e.g.:
var list = new [] {1,2,3};
var q = db.Customers.Where(c => list.Contains(c.Id)).ToList();
The enumerable "list" looks like a simple variable in C#/LINQ, but it needs to be translated to a query like this (simplified for clarity):
SELECT * FROM Customers WHERE Id IN(1,2,3)
If list changes to new[] {5,4,3,2,1}, we would have to generate the SQL query again:
SELECT * FROM Customers WHERE Id IN(5,4,3,2,1)
As a potential solution, we have talked about leaving generated SQL queries open with some kind of special placeholder, e.g. storing in the query cache an entry that just says
SELECT * FROM Customers WHERE Id IN(<place holder>)
At execution time, we could pick this SQL from the cache and finish the SQL generation with the actual values. Another option would be to leverage a Table-Valued Parameter for the list, if the target database supports it. The first option would probably only work with constant values; the latter requires a database that supports a special feature. Both are very complex to implement in EF.
Auto compiled queries
On the other hand, for automatic compiled queries (as opposed to explicit CompiledQuery) the issue becomes somewhat artificial: in this case we compute the query cache key after the initial LINQ translation, hence any IEnumerable<T> argument passed should have already been expanded into DbExpression nodes: a tree of OR-ed equality comparisons in EF5, and usually a single DbInExpression node in EF6. Since the query tree already contains a distinct expression for each distinct combination of elements in the source argument of Enumerable.Contains (and therefore for each distinct output SQL query), it is possible to cache the queries.
However, even in EF6, these queries are not cached, even in the auto compiled queries case. The key reason for that is that we expect the variability of elements in a list to be high (this has to do with the variable size of the list, but it is also exacerbated by the fact that we normally don't parameterize values that appear as constants to the query, so a list of constants will be translated into constant literals in SQL), so with enough calls to a query with Enumerable.Contains you could produce considerable cache pollution.
We have considered alternative solutions to this as well, but we haven't implemented any yet. So my conclusion is that you would be better off with the second workaround in most cases if, as I said, you know the number of elements in the list will remain small and manageable (otherwise you will face performance issues).
Hope this helps!
As of now, this is still a problem in Entity Framework Core when using the SQL Server Database Provider.
💡 Still on Entity Framework 6 (non-core)? Skip to the next section.
I wrote QueryableValues to solve this problem in a flexible and performant way; with it you can compose the values from an IEnumerable<T> in your query, as if it were another entity in your DbContext.
In contrast to other solutions out there, QueryableValues achieves this level of performance by:
Resolving with a single round-trip to the database.
Preserving the query's execution plan regardless of the provided values.
Usage example:
// Sample values.
IEnumerable<int> values = Enumerable.Range(1, 10);

// Using a Join.
var myQuery1 =
    from e in dbContext.MyEntities
    join v in dbContext.AsQueryableValues(values) on e.Id equals v
    select new
    {
        e.Id,
        e.Name
    };

// Using Contains.
var myQuery2 =
    from e in dbContext.MyEntities
    where dbContext.AsQueryableValues(values).Contains(e.Id)
    select new
    {
        e.Id,
        e.Name
    };
You can also compose complex types!
It's available as a NuGet package, and the project can be found here. It's distributed under the MIT license.
The benchmarks speak for themselves.
An Alternative for Entity Framework 6 (non-core)
🎉 NEW! QueryableValues EF6 Edition has arrived!
I'll explain how to manually provide some of the functionality of QueryableValues on this legacy version of Entity Framework; specifically, the ability to compose an IEnumerable<int> with any of your entities, in the same way that QueryableValues does it on EF Core. You can use this same technique to support collections of other simple types, like long, string, etc.
Requirements
Must use the SQL Server provider
Must use the database-first strategy OR you already have a way to map a TVF using the code-first strategy
Instructions Summary
Create a method that takes an IEnumerable<int> and returns XML.
Create a TVF in your database that takes XML and returns a rowset.
Add the TVF to the EDMX using the designer.
Encapsulate the code that glues the functions created on step 1 and 2 and return an IQueryable<int>.
Use the IQueryable<int> in your queries as desired.
Instructions
1. Create a method that takes an IEnumerable<int> and returns XML
This method will serialize the provided values as XML, so later on it can be transmitted as a parameter in your query.
static string GetXml<T>(IEnumerable<T> values)
{
    var sb = new System.Text.StringBuilder();
    using (var stringWriter = new System.IO.StringWriter(sb))
    {
        var settings = new System.Xml.XmlWriterSettings
        {
            ConformanceLevel = System.Xml.ConformanceLevel.Fragment
        };
        using (var xmlWriter = System.Xml.XmlWriter.Create(stringWriter, settings))
        {
            xmlWriter.WriteStartElement("R");
            foreach (var value in values)
            {
                xmlWriter.WriteStartElement("V");
                xmlWriter.WriteValue(value);
                xmlWriter.WriteEndElement();
            }
            xmlWriter.WriteEndElement();
        }
    }
    return sb.ToString();
}
If the above method is provided with new[] { 1, 2, 3 }, it will return an XML string with the following structure:
<R><V>1</V><V>2</V><V>3</V></R>
2. Create a TVF in your database that takes XML and returns a rowset
The following table-valued function (TVF) will take the XML created by the previous function and project it as a rowset with a single column (V) that can then be used on SQL Server's side of your query. It must be created in the database associated with your EDMX file, so it can be added to your EDMX model in the next step.
CREATE FUNCTION dbo.udf_GetIntValuesFromXml
(
    @Values XML
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
(
    SELECT I.value('. cast as xs:integer?', 'int') AS V
    FROM @Values.nodes('/R/V') N(I)
)
The above function, when provided with the <R><V>1</V><V>2</V><V>3</V></R> XML, will return the following rowset:
V
1
2
3
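To sanity-check the function directly in SQL Server, you could run a test query like this (hypothetical; not part of the EF flow):
SELECT V
FROM dbo.udf_GetIntValuesFromXml(N'<R><V>1</V><V>2</V><V>3</V></R>');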
3. Add the TVF to the EDMX using the designer
Table-Valued Functions (TVFs) - EF Docs
After adding this function to your EDMX model, ensure to save the changes to the EDMX file so that your DbContext generated code is up to date.
4. Encapsulate the code that glues the functions created on step 1 and 2 and return an IQueryable<int>
The following code encapsulates the XML serializer function explained above and everything else you need on the .NET side to make this work:
using System.Collections.Generic;
using System.Linq;
using System.Text;

public static class QueryableValuesClassicDbContextExtensions
{
    private static string GetXml<T>(IEnumerable<T> values)
    {
        var sb = new StringBuilder();
        using (var stringWriter = new System.IO.StringWriter(sb))
        {
            var settings = new System.Xml.XmlWriterSettings
            {
                ConformanceLevel = System.Xml.ConformanceLevel.Fragment
            };
            using (var xmlWriter = System.Xml.XmlWriter.Create(stringWriter, settings))
            {
                xmlWriter.WriteStartElement("R");
                foreach (var value in values)
                {
                    xmlWriter.WriteStartElement("V");
                    xmlWriter.WriteValue(value);
                    xmlWriter.WriteEndElement();
                }
                xmlWriter.WriteEndElement();
            }
        }
        return sb.ToString();
    }

    public static IQueryable<int> AsQueryableValues(this IQueryableValuesClassicDbContext dbContext, IEnumerable<int> values)
    {
        return dbContext.GetIntValuesFromXml(GetXml(values));
    }
}

public interface IQueryableValuesClassicDbContext
{
    IQueryable<int> GetIntValuesFromXml(string xml);
}
The IQueryableValuesClassicDbContext interface is intended to be explicitly implemented on your DbContext class to provide access to the TVF that was added to the EDMX model.
You can do this by creating a partial class for your DbContext. For example, if your DbContext name is TestDbContext:
using System.Linq;

partial class TestDbContext : IQueryableValuesClassicDbContext
{
    IQueryable<int> IQueryableValuesClassicDbContext.GetIntValuesFromXml(string xml)
    {
        return udf_GetIntValuesFromXml(xml).Select(i => i.Value);
    }
}
5. Use the IQueryable<int> in your queries as desired (via AsQueryableValues)
using (var db = new TestDbContext())
{
    var valuesQuery = db.AsQueryableValues(new[] { 1, 2, 3, 4, 5 });

    var resultsUsingContains = db.MyEntity
        .Where(i => valuesQuery.Contains(i.MyEntityID))
        .Select(i => new { i.MyEntityID, i.PropA })
        .ToList();

    var resultsUsingJoin = (
        from i in db.MyEntity
        join v in valuesQuery on i.MyEntityID equals v
        select new { i.MyEntityID, i.PropA }
        )
        .ToList();
}
Below is the T-SQL generated behind the scenes for the above EF queries. As you can see, it's completely parameterized.
exec sp_executesql N'SELECT
[Extent1].[MyEntityID] AS [MyEntityID],
[Extent1].[PropA] AS [PropA]
FROM [dbo].[MyEntity] AS [Extent1]
WHERE EXISTS (SELECT
1 AS [C1]
FROM [dbo].[udf_GetIntValuesFromXml](@Values) AS [Extent2]
WHERE ([Extent2].[V] = [Extent1].[MyEntityID]) AND ([Extent2].[V] IS NOT NULL)
)',N'@Values nvarchar(4000)',@Values=N'<R><V>1</V><V>2</V><V>3</V><V>4</V><V>5</V></R>'

exec sp_executesql N'SELECT
[Extent1].[MyEntityID] AS [MyEntityID],
[Extent1].[PropA] AS [PropA]
FROM [dbo].[MyEntity] AS [Extent1]
INNER JOIN [dbo].[udf_GetIntValuesFromXml](@Values) AS [Extent2] ON [Extent1].[MyEntityID] = [Extent2].[V]',N'@Values nvarchar(4000)',@Values=N'<R><V>1</V><V>2</V><V>3</V><V>4</V><V>5</V></R>'
Limitations
The provided IEnumerable<int> is enumerated at query build time, not at execution time.
The final query cannot reference more than one IQueryable<T> returned by the AsQueryableValues extension method. This is a limitation around composing the same TVF more than once: EF will create two parameters with the same name, which is illegal, and you will get the following error:
A parameter named 'Values' already exists in the parameter collection. Parameter names must be unique in the parameter collection.
An incorrect type is used for the XML parameter of the TVF (notice the use of nvarchar instead of xml in the T-SQL above). This is a deficiency in the EF infrastructure (ObjectParameter) that's used to compose the TVF. Not using the correct parameter type has a detrimental effect on performance, due to the implicit casting that must be done by SQL Server.
Conclusion
Despite the limitations, this is still a robust solution compared to not using parameterized T-SQL queries. To understand the underlying issue this mitigates, you can continue reading here.
Legal Stuff
Feel free to use the code and examples above as you wish. I'm releasing it under the MIT license:
MIT License
Copyright (c) Carlos Villegas (yv989c)
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
I had this exact challenge. Here is how I tackled this problem for either strings or longs in an extension method for IQueryables.
To limit cache pollution, we create the same query with a multiple of m (configurable) parameters: 1 * m, 2 * m, and so on. So if the setting is 15, the query plans will have either 15, 30, 45, etc. parameters, depending on the number of elements in the Contains (we don't know the count in advance, but it is probably less than 100). This limits the number of query plans to 3 as long as the biggest Contains has at most 45 elements.
The remaining parameters are filled with a placeholder value that (we know) doesn't exist in the database, in this case '-1'.
Resulting query part;
... WHERE [Filter1].[SomeProperty] IN (@p__linq__0,@p__linq__1, (...) ,@p__linq__19)
... @p__linq__0='SomeSearchText1',@p__linq__1='SomeSearchText2',@p__linq__2='-1',
(...) ,@p__linq__19='-1'
Usage:
ICollection<string> searchtexts = .....ToList();
//or
//ICollection<long> searchIds = .....ToList();
//this is the setting that is relevant for the resulting multitude of possible query plans
int itemsPerSet = 15;
IQueryable<MyEntity> myEntities = (from c in dbContext.MyEntities
select c)
.WhereContains(d => d.SomeProperty, searchtexts, "-1", itemsPerSet);
The extension method:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

namespace MyCompany.Something.Extensions
{
    public static class IQueryableExtensions
    {
        public static IQueryable<T> WhereContains<T, U>(this IQueryable<T> source, Expression<Func<T, U>> propertySelector, ICollection<U> identifiers, U placeholderThatDoesNotExistsAsValue, int cacheLevel)
        {
            if (!(propertySelector.Body is MemberExpression))
            {
                throw new ArgumentException("propertySelector must be a MemberExpression", nameof(propertySelector));
            }
            var propertyExpression = propertySelector.Body as MemberExpression;
            var propertyName = propertyExpression.Member.Name;
            return WhereContains(source, propertyName, identifiers, placeholderThatDoesNotExistsAsValue, cacheLevel);
        }

        public static IQueryable<T> WhereContains<T, U>(this IQueryable<T> source, string propertyName, ICollection<U> identifiers, U placeholderThatDoesNotExistsAsValue, int cacheLevel)
        {
            return source.Where(ContainsPredicateBuilder<T, U>(identifiers, propertyName, placeholderThatDoesNotExistsAsValue, cacheLevel));
        }

        public static Expression<Func<T, bool>> ContainsPredicateBuilder<T, U>(ICollection<U> ids, string propertyName, U placeholderValue, int cacheLevel = 20)
        {
            if (cacheLevel < 1)
            {
                throw new ArgumentException("cacheLevel must be greater than or equal to 1", nameof(cacheLevel));
            }

            Expression<Func<T, bool>> predicate;
            var propertyIsNullable = Nullable.GetUnderlyingType(typeof(T).GetProperty(propertyName).PropertyType) != null;

            // Fill a list of (a multiple of cacheLevel) parameters for the property:
            // the selected items, padded with the placeholder value to fill the list.
            Expression finalExpression = Expression.Constant(false);
            var parameter = Expression.Parameter(typeof(T), "x");

            /* factor makes sure that this query part contains a multiple of cacheLevel parameters (i.e. 20, 40, 60, ...),
             * so the number of query plans is limited even if lots of users have more than cacheLevel items selected */
            int factor = Math.Max(1, (int)Math.Ceiling((double)ids.Count / cacheLevel));
            for (var i = 0; i < factor * cacheLevel; i++)
            {
                U id = placeholderValue;
                if (i < ids.Count)
                {
                    id = ids.ElementAt(i);
                }
                // Wrapping the value in an anonymous type makes EF treat it as a parameter instead of a constant literal.
                var temp = new { id };
                var constant = Expression.Constant(temp);
                var field = Expression.Property(constant, "id");
                var member = Expression.Property(parameter, propertyName);
                if (propertyIsNullable)
                {
                    member = Expression.Property(member, "Value");
                }
                var expression = Expression.Equal(member, field);
                finalExpression = Expression.OrElse(finalExpression, expression);
            }
            predicate = Expression.Lambda<Func<T, bool>>(finalExpression, parameter);
            return predicate;
        }
    }
}
This is really a huge problem, and there's no one-size-fits-all answer. However, when most lists are relatively small, diverga's second workaround works well. I've built a library, distributed as a NuGet package, to perform this transformation with as little modification to the query as possible:
https://github.com/bchurchill/EFCacheContains
It's been tested out in one project, but feedback and user experiences would be appreciated! If any issues come up please report on github so that I can follow-up.

Hibernate Delete object by id

Which is the best method (performance-wise) to delete an object if only its id is available?
HQL. Will executing this HQL load the SessionContext objects into the Hibernate persistence context?
for (int i = 0; i < listOfIds.size(); i++) {
    Query q = createQuery("delete from session_context where id = :id");
    q.setLong("id", listOfIds.get(i));
    q.executeUpdate();
}
Load by ID and delete.
for (int i = 0; i < listOfIds.size(); i++) {
    SessionContext session_context = (SessionContext) getHibernateTemplate().load(SessionContext.class, listOfIds.get(i));
    getHibernateTemplate().delete(session_context);
}
Here SessionContext is the object mapped to session_context table.
Or, of course, is there an altogether different and better approach?
Out of the two, the first one is better, as it will save memory. When you want to delete an entity and you already have its id, writing HQL is preferred.
In your case there is a third and better option.
Try the below:
//construct the list of ids using a StringBuffer, by iterating over the List
String idList = "1,2,3,.....";
Query q = createQuery("delete from session_context where id in (:idList)");
q.setString("idList", idList);
q.executeUpdate();
Now if there are 4 items in the list, only one query will be fired; previously there would have been 4.
Note: for the above to work, session_context should be an independent table.
Btw, say no to that ugly string; there is .setParameterList(), so:
List<Long> idList = Arrays.asList(1L, 2L, 3L);
Query q = createQuery("delete from session_context where id in (:idList)");
q.setParameterList("idList", idList);
q.executeUpdate();
update
I must follow up on this: in our environment, it turned out in the end that using setParameterList gives much worse performance than creating the string manually and using setString, as @ManuPK suggested.
You should also consider caching - the first-level (session) cache and the second-level cache.
The first option is probably the best if the delete is the only or the first operation in the transaction.
If you query for some SessionContext objects and then call the HQL to delete them, all objects in the query cache will be evicted, because Hibernate doesn't know which ones were deleted. This is not the case with the second approach.
If you use the second-level cache, it is even more complicated, and it highly depends on what you do with the SessionContext objects.

LINQ performance

I am reading records from a database, checking some conditions, and storing them in a List<Result>, where Result is a class. Then I perform LINQ queries on the List<Result>, such as grouping, counting, etc. There may be at least 50,000 records in the List<Result>, so is it better to use LINQ, or to reinsert the records into the db and perform the queries there?
Why not store it in an IQueryable instead of a List and use LINQ to SQL or LINQ to Entities? The actual dataset will never be pulled into memory, and the queries will actually go down to the database to run.
Example:
Database db = new Database(); // this is what L2E gives you...
var children = db.Person.Where(p => p.Age < 21); // no actual database query performed
// will do: "select count(*) from Person where Age < 21"
int numChildren = children.Count();
var grouped = children.GroupBy(p => p.Age); // no actual query
int youngest = children.Min(p => p.Age); // performs query
int numYoungest = children.Count(p => p.Age == youngest); // performs query
var youngestNames = children.Where(p => p.Age == youngest).Select(p => p.Name); // no query
var anArray = youngestNames.ToArray(); // performs query
string names = string.Join(", ", anArray); // no query of course
I'm currently asking the same kind of thing. I don't really know the exact answer either, but from what I know, LINQ is not well known for being fast on objects. Also, since a List is not indexed, advanced queries on it will probably require a lot of computing to get what you asked for. Also, this code is generic, which means slower execution.
The best thing would be, if you are able, to do everything in one query, or even write a stored procedure to do your processing. Another possibility: if you are always checking the same initial condition, create a view and run your query directly against it (instead of reinserting from the client). I think that if you have more than 50,000 results, using a list is probably not a good idea (memory and performance).
This probably doesn't answer your question directly, but other than doing a benchmark, you won't know. It really depends on what you are doing with the data.
