I'm working on an audit log which saves sessions in RavenDB. Initially, the website for querying the audit logs was responsive enough, but as the amount of logged data has increased, the search page has become unusable (it times out before returning under default settings, regardless of the query used). Right now we have about 45 million sessions in the collection that gets queried, but the steady state is expected to be around 150 million documents.
The problem is that with this much live data, playing around to test things has become impractical. I hope someone can give me some ideas about the most productive areas to investigate.
The index looks like this:
public AuditSessions_WithSearchParameters()
{
    Map = sessions => from session in sessions
                      select new Result
                      {
                          ApplicationName = session.ApplicationName,
                          SessionId = session.SessionId,
                          StartedUtc = session.StartedUtc,
                          User_Cpr = session.User.Cpr,
                          User_CprPersonId = session.User.CprPersonId,
                          User_ApplicationUserId = session.User.ApplicationUserId
                      };

    Store(r => r.ApplicationName, FieldStorage.Yes);
    Store(r => r.StartedUtc, FieldStorage.Yes);
    Store(r => r.User_Cpr, FieldStorage.Yes);
    Store(r => r.User_CprPersonId, FieldStorage.Yes);
    Store(r => r.User_ApplicationUserId, FieldStorage.Yes);
}
The essence of the query is this bit:
// Query input parameters
var fromDateUtc = fromDate.ToUniversalTime();
var toDateUtc = toDate.ToUniversalTime();

sessionQuery = sessionQuery
    .Where(s =>
        s.ApplicationName == applicationName &&
        s.StartedUtc >= fromDateUtc &&
        s.StartedUtc <= toDateUtc
    );

var totalItems = Count(sessionQuery);

var sessionData =
    sessionQuery
        .OrderByDescending(s => s.StartedUtc)
        .Skip((page - 1) * PageSize)
        .Take(PageSize)
        .ProjectFromIndexFieldsInto<AuditSessions_WithSearchParameters.ResultWithAuditSession>()
        .Select(s => new
        {
            s.SessionId,
            s.SessionGroupId,
            s.ApplicationName,
            s.StartedUtc,
            s.Type,
            s.ResourceUri,
            s.User,
            s.ImpersonatingUser
        })
        .ToList();
First, to determine the number of pages of results, I count the number of results in my query using this method:
private static int Count<T>(IRavenQueryable<T> results)
{
    RavenQueryStatistics stats;
    results.Statistics(out stats).Take(0).ToArray();
    return stats.TotalResults;
}
This turns out to be very expensive in itself, so optimizations are relevant both here and in the rest of the query.
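One saving that may be possible regardless of the indexing strategy: since RavenDB fills in the statistics object as a side effect of executing a query, the count could presumably be folded into the page query itself instead of being issued as a separate Take(0) round trip. A minimal sketch, assuming the same client API as above:

RavenQueryStatistics stats;

var sessionData =
    sessionQuery
        .Statistics(out stats)   // filled in when the query below executes
        .OrderByDescending(s => s.StartedUtc)
        .Skip((page - 1) * PageSize)
        .Take(PageSize)
        .ProjectFromIndexFieldsInto<AuditSessions_WithSearchParameters.ResultWithAuditSession>()
        .ToList();

var totalItems = stats.TotalResults;   // no separate count query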
The query time is not related to the amount of result items in any relevant way. If I use a different value for the applicationName parameter than any of the results, it is just as slow.
One area of improvement could be to use sequential IDs for the sessions. For reasons not relevant to this post, I found it most practical to use guid-based ids. I'm not sure if I can easily change the IDs of the existing documents (with this much data), and I would prefer not to drop the data (but might if the expected impact is large enough). I understand that sequential ids result in better-behaved B-trees for the indexes, but I have no idea how significant the impact is.
Another approach could be to include a timestamp in the id and query for documents with ids starting with the string matching enough of the time to filter the results. An example id could be AuditSessions/2017-12-31-23-31-42/bc835d6c-2fba-4591-af92-7aab96339d84. This also requires me to update or drop all the existing data, but it has the added benefit of mostly sequential ids.
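For illustration, generating such an id could look something like this (the helper and the exact format are hypothetical, just to make the idea concrete):

// Hypothetical helper for the timestamp-prefixed id scheme
private static string CreateSessionId(DateTime startedUtc)
{
    return string.Format(
        "AuditSessions/{0:yyyy-MM-dd-HH-mm-ss}/{1}",
        startedUtc,
        Guid.NewGuid());
}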
A third approach could be to move old data into a different collection over time, in recognition of the fact that you would most often look at the most recent data. This requires a background job and support for querying across collection time boundaries. It also has the issue that the collection with the old sessions is still slow if you need to access it.
I'm hoping there is something simpler than these solutions, such as modifying the query or the indexed fields in a way that avoids a lot of work.
At a glance, it is probably related to the range query on StartedUtc.
I'm assuming that you are indexing exact timestamps, so you have a LOT of distinct values there.
If you can, you can dramatically reduce the cost by changing the index to second or minute granularity (which is usually what you are querying on anyway), and then using Ticks, which allows us to use a numeric range query.
StartedUtcTicks = new DateTime(session.StartedUtc.Year, session.StartedUtc.Month, session.StartedUtc.Day, session.StartedUtc.Hour, session.StartedUtc.Minute, session.StartedUtc.Second).Ticks,
And then query by the date ticks.
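For reference, a minimal sketch of what the query side might look like against such a field (this assumes a long StartedUtcTicks property is added to the Result class and mapped as above; the truncation of the inputs mirrors the truncation in the index):

// Truncate the query inputs to the same (second) granularity as the index
var fromTicks = new DateTime(fromDateUtc.Year, fromDateUtc.Month, fromDateUtc.Day,
    fromDateUtc.Hour, fromDateUtc.Minute, fromDateUtc.Second).Ticks;
var toTicks = new DateTime(toDateUtc.Year, toDateUtc.Month, toDateUtc.Day,
    toDateUtc.Hour, toDateUtc.Minute, toDateUtc.Second).Ticks;

sessionQuery = sessionQuery
    .Where(s =>
        s.ApplicationName == applicationName &&
        s.StartedUtcTicks >= fromTicks &&
        s.StartedUtcTicks <= toTicks
    );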
I'm using EF Core but I'm not really an expert with it, especially when it comes to details like querying tables in a performant manner...
So what I'm trying to do is simply get the max value of one column from a table, over filtered rows.
What I have so far is this:
protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();

    // Filter the table data down to the rows relevant to us.
    // The whole table may contain 0 rows or millions of them.
    IQueryable<Measurement> dbMeasuringsExisting = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
                    && meas.MachineId == DBMatchingItem.Id);

    if (dbMeasuringsExisting.Any())
    {
        // The max value we're interested in. dbMeasuringsExisting could still contain millions of rows.
        iMaxMessID = dbMeasuringsExisting.Max(meas => meas.MessID);
    }
}
The equivalent SQL to what I want would be something like this.
select max(MessID)
from Measurement
where MeasuringInstanceGuid = Globals.MeasProgInstance.Guid
and MachineId = DBMatchingItem.Id;
While the above code works (it returns the correct value), I think it will have a performance issue as the table grows, because the max filtering would be done client-side after all the rows are transferred. Or am I wrong here?
How to do it better? I want the database server to filter my data. Of course I don't want any SQL script ;-)
This can be addressed by typing the result as nullable, so that an empty set does not throw an error, and then applying a default value for the int. Alternatively, you can just assign it to a nullable int. Note the assumption here that the ID is of an integer type; the same principle would apply to a Guid as well.
int MaxMessID = dbMeasuringsExisting.Max(p => (int?)p.MessID) ?? 0;
There is no need for the Any() call, as it causes an additional round trip to the database, which is not desirable in this case.
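Putting the two points together, ReadExistingDBEntry from the question could presumably shrink to a single round trip (a sketch, keeping the names from the question):

protected override void ReadExistingDBEntry()
{
    using Model.ResultContext db = new();

    // One round trip: the filter and MAX both run on the server;
    // an empty match yields null, which we default to 0.
    iMaxMessID = db.Measurements
        .Where(meas => meas.MeasuringInstanceGuid == Globals.MeasProgInstance.Guid
                    && meas.MachineId == DBMatchingItem.Id)
        .Max(meas => (int?)meas.MessID) ?? 0;
}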
I have this LINQ query that translates very oddly to SQL. I get the correct results, but there must be a better way. So question 1 is:
Why is it that in SQL I get no group by, no count, and all of the columns are returned instead of just 2, and yet the results in C# are correct? (I checked with the profiler.)
and question 2 is:
I would like to modify the query slightly so that I also get the results where the count is 0. At the moment I only get rows where the count > 0, because of the group by.
LINQ:
List<Tuple<string, int>> countPerType = db1.Audits
    .OrderBy(p => p.CreatedBy)
    .GroupBy(o => new { o.Type, o.CreatedBy })
    .ToList()
    .Select(g => new Tuple<string, int>(
        g.Select(f => f.CreatedBy + ',' + f.Type).FirstOrDefault(),
        (int?)g.Count() ?? 0))
    .ToList();
Note that if I remove the .ToList() in the middle, I get exception "only parameterless constructors and initializers are supported in linq to entities".
Thanks for your input
You have run into several problems. I think the cause is that you aren't aware of the difference between IEnumerable queries and IQueryable queries.
An IEnumerable query contains all the information needed to enumerate over its elements. The query is executed by your own process.
An IQueryable query contains an Expression and a Provider. The Provider knows who will execute the query and how to communicate with that executor. Quite often the executor is a database, but it can be other things, like web services or JSON files.
In your case the executor is a database and the language is SQL.
When the GetEnumerator() function of your IQueryable is called, the Provider is ordered to translate the Expression into the language that the executor understands. The translated query is sent to the executor, and the returned data is put into an IEnumerator (not an IEnumerable!).
Of course SQL does not know what a System.Tuple is, nor does it know functions like String.operator+. That is why your Provider can't translate your expression into SQL, and the reason you had to add your first ToList().
In an IQueryable you can't use any of your own functions, and only a limited set of .NET functions is supported. See this list of supported and unsupported LINQ methods.
It is not advisable to use ToList() at this stage of your query, because it enumerates all elements of your sequence when in fact you only need an enumerator. It could be that in the rest of your query you only want a few elements; in that case it would be a waste to enumerate over all of them to create a list, and then enumerate again to do the rest of your LINQ.
Instead of ToList(), use AsEnumerable(). This switches the rest of the query to local processing without materializing anything yet: the elements are not enumerated until they are needed, and you can call your own functions in the remainder of the query.
Another problem is that you transport way more data to local memory than you plan to use. One of the slower parts of database queries is the transport of data to your process, so you should minimize the amount of data.
You took all Audits and created groups of Audits that have the same values for (Type, CreatedBy). In other words: all Audits in the same group have the same value for (Type, CreatedBy). This value is also the Key of the group.
You don't want all Audits locally; you only want the Key of each group and the number of elements in that group (= the number of audits that have (Type, CreatedBy) equal to the key).
This is the only data you need to transport to local memory: Type, CreatedBy, and the number of audits in the group:
var result = db1.Audits
    .GroupBy(o => new { o.Type, o.CreatedBy })
    .Select(group => new
    {
        Type = group.Key.Type,
        CreatedBy = group.Key.CreatedBy,
        AuditCount = group.Count(),
    })
    .OrderBy(item => item.CreatedBy)

    // the data that is left is the data you need locally;
    // bring it to local memory:
    .AsEnumerable()

    // if you want, you can put Type and CreatedBy into one string
    .Select(item => new
    {
        AuditType = item.Type + item.CreatedBy,
        AuditCount = item.AuditCount,
    });
I chose not to put the result in a Tuple, because you would lose the help from the compiler if you mix up fields. But if you really want to, suit yourself.
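As for your second question (combinations with a count of 0): a group by can only produce groups that contain at least one element, so you have to build the set of expected (Type, CreatedBy) combinations yourself and fill in 0 where no group exists. A rough sketch of the idea, assuming the combinations you care about are the cross product of the Types and CreatedBy values that occur at all (if the missing combinations come from somewhere else, e.g. a Users table, substitute those lists):

// One grouped query for the combinations that do occur, evaluated locally.
var counts = db1.Audits
    .GroupBy(o => new { o.Type, o.CreatedBy })
    .Select(g => new { g.Key.Type, g.Key.CreatedBy, AuditCount = g.Count() })
    .ToList();

// Expand to every combination, filling in 0 for those that never occur.
var allTypes = counts.Select(c => c.Type).Distinct().ToList();
var allCreators = counts.Select(c => c.CreatedBy).Distinct().ToList();

var withZeroes =
    from type in allTypes
    from createdBy in allCreators
    select new
    {
        Type = type,
        CreatedBy = createdBy,
        AuditCount = counts
            .Where(c => c.Type == type && c.CreatedBy == createdBy)
            .Select(c => c.AuditCount)
            .FirstOrDefault(),   // 0 when the combination has no audits
    };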
Is there a way to get all the objects in key/value format that share one secondary index value? I know we can get the list of keys for one secondary index (bucket/{{bucketName}}/index/{{index_name}}/{{index_val}}), but my requirements are such that I need the full objects as well. I don't want to perform a separate query for each key to get the object details if there is a way around it.
I am completely new to Riak and I am totally a front-end guy, so please bear with me if something I ask is of novice level.
In Riak, it's sometimes the case that the better way is to do separate lookups for each key. Coming from other databases this seems strange, and likely inefficient, but you may find your query is faster over an index plus a bunch of single-object gets than a map/reduce for all the objects in a single go.
Try both of these approaches and see which turns out fastest for your dataset. The variables that affect this are: the size of the data being queried, the size of each document, the power of your cluster, the load the cluster is under, etc.
Python code demonstrating the index and separate gets (if the data you're getting is large, this method can be made memory-efficient on the client, as you don't need to store all the objects in memory):
query = riak_client.index("bucket_name", 'myindex', 1)
query.map("""
    function(v, kd, args) {
        return [v.key];
    }""")
results = query.run()

bucket = riak_client.bucket("bucket_name")
for key in results:
    obj = bucket.get(key)
    # ... do something with the object
Python code demonstrating a map/reduce for all objects (returns a list of {key:document} objects):
query = riak_client.index("bucket_name", 'myindex', 1)
query.map("""
    function(v, kd, args) {
        var obj = Riak.mapValuesJson(v)[0];
        return [{
            'key': v.key,
            'data': obj,
        }];
    }""")
results = query.run()
I am reading records from a database, checking some conditions, and storing them in a List<Result> (Result is a class). Then I perform LINQ queries on the List<Result>, like grouping, counting, etc. There may be at least 50,000 records in the List<Result>, so is it better to run the LINQ over the list, or to reinsert the records into the db and perform the queries there?
Why not store it in an IQueryable instead of a List? Using LINQ to SQL or LINQ to Entities, the actual dataset is never pulled into memory, and the queries actually go down to the database to run.
Example:
Database db = new Database(); // this is what L2E gives you...

var children = db.Person.Where(p => p.Age < 21);  // no actual database query performed

// will do: "select count(*) from Person where Age < 21"
int numChildren = children.Count();

var grouped = children.GroupBy(p => p.Age);       // no actual query
int youngest = children.Min(p => p.Age);          // performs query
int numYoungest = children.Count(p => p.Age == youngest);  // performs query

var youngestNames = children.Where(p => p.Age == youngest).Select(p => p.Name); // no query
var anArray = youngestNames.ToArray();            // performs query
string names = string.Join(", ", anArray);        // no query of course
I'm currently asking the same kind of thing right now. I don't really know the exact answer either, but from what I know, LINQ is not well known for being fast on objects. Also, since a List is not indexed, when you run advanced queries on it, the backend will probably need to do a lot of computing to get what you asked for. Also, this code is generic, which means slower execution.
The best thing would be, if you are able, to do everything in one query, or even use a stored proc to do your processing. Another possibility, if you are always checking the same initial condition, is to create a view and run your query directly against it (instead of reinserting from the client). I think that if you have more than 50,000 results, using a list is probably not a good idea (for memory and performance).
It probably doesn't answer your question directly, but other than benchmarking, you won't know. It really depends on what you are doing with the data.
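For what it's worth, a minimal way to time the in-memory option with a Stopwatch (the names here are placeholders for your own query; wrap the equivalent database query the same way and compare the two timings):

var sw = System.Diagnostics.Stopwatch.StartNew();

// Option A: group/count the records already loaded into the List<Result>.
// 'results' and 'SomeKey' are placeholders for your list and grouping key.
var inMemoryCounts = results
    .GroupBy(r => r.SomeKey)
    .Select(g => new { g.Key, Count = g.Count() })
    .ToList();

sw.Stop();
Console.WriteLine("LINQ to Objects took " + sw.ElapsedMilliseconds + " ms");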
I have a parent-child table relationship. In a repository, I'm doing this:
return (from p in _ctx.Parents.Include("Children")
        select p).AsQueryable<Parent>();
Then in a filter, I want to filter the parent by a list of child ids:
IQueryable<Parent> qry; // from above
List<int> ids;          // huge list (8500)

var filtered =
    from p in qry.Where(p => p.Children.Any(c => ids.Contains(c.ChildId)))
    select p;
My list of ids is huge. This generates a simple SQL statement that does have a huge list of ids "in (1,2,3...)", but it takes no appreciable time to run by itself. EF, however, takes about a full minute just to generate the statement. I proved this by setting a breakpoint and calling:
((ObjectQuery<Parent>)filtered).ToTraceString();
This takes all the time. Is the problem in my last LINQ statement? I don't know any other way to do the equivalent of Child.ChildId in (ids). And even if my LINQ statement is bad, how in the world can this take so long?
Unfortunately, building queries in LINQ to Entities is a pretty heavy hit, but I've found it usually saves time due to the ability to build queries from their component pieces before actually hitting the database.
It is likely that the implementation of the Contains method uses an algorithm that assumes Contains is generally used on a relatively small set of data. According to my tests, the amount of time it takes per ID in the list begins to skyrocket at around 8000.
So it might help to break your query into pieces: group the ids into groups of 1000 or less, and concatenate a bunch of Where expressions.
var idGroups = ids.GroupBy(i => i / 1000);   // each bucket holds at most 1000 distinct ids
var q = Parents.Include("Children").AsQueryable();
var newQ = idGroups
    .Select(g => q.Where(w => w.Children.Any(wi => g.Contains(wi.ChildId))))
    .Aggregate((whole, part) => whole.Concat(part));
This speeds things up significantly, but it might not be enough for your purposes, in which case you'll have to resort to a stored procedure. Unfortunately, this particular use case just doesn't fit into "the box" of expected Entity Framework behavior. If your list of ids could begin as a query from the same Entity Context, Entity Framework would have worked fine.
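To illustrate that last point: if the ids came from a query on the same context, Contains would translate into a server-side subquery, and the giant literal IN list would never be built. A hypothetical sketch (SelectedChildIds stands in for whatever entity set would supply the ids):

// Hypothetical: the ids arrive as an IQueryable<int> from the same context
IQueryable<int> idsQuery = _ctx.SelectedChildIds.Select(s => s.ChildId);

var filtered = qry.Where(p => p.Children.Any(c => idsQuery.Contains(c.ChildId)));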
Re-write your query in Lambda syntax and it will cut the time by as much as 3 seconds (or at least it did for my EF project).
return _ctx.Parents.Include( "Children" ).AsQueryable<Parent>();
and
IQueryable<Parent> qry; // from above
List<int> ids; // huge list (8500)
var filtered = qry.Where( p => p.Children.Any(c => ids.Contains(c.ChildId)) );