Having to call .ToList() in Entity Framework 4.1 before .Skip() and .Take() on a large table - asp.net-mvc-3

I'm trying to do something a little clever with my app. I have a table, Adverts, which contains info on cars: model, mileage etc. The table is related to a few other tables via foreign keys, e.g. the model name is retrieved through a foreign key linking to a "VehicleModels" table, and so on.
Within the app's "Entities" dir (classes which map to tables in the database) I have one for the Adverts table, Advert.cs. This has a couple of properties which EF has been told to ignore (in the Fluent API) as they don't map to actual fields in the Adverts table.
The idea behind these fields is to store the calculated distance from a postcode (zip code) the user enters in the search form, which filters the Adverts table when they only want to see cars available within a certain radius, e.g.:
IQueryable<Advert> FilteredAdverts = repository.Adverts
    .Where(am => mfr == "" || am.Manufacturer == mfr);
    // .Where(am => model == ...) etc. for the remaining filters
Later on, to calculate the distance the code resembles:
if (userPostcode != null) {
    foreach (var ap in FilteredAdverts.ToList()) {
        distmiles = // calculate distance in miles
        distkm = // calculate distance in km
        ap.DistanceMiles = Convert.ToInt32(distmiles);
        ap.DistanceKm = Convert.ToInt32(distkm);
    }
}
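(For reference, the elided distance calculation is typically a haversine computation. A minimal sketch, assuming both postcodes have already been geocoded to latitude/longitude in degrees; the names here are illustrative, not from the original code:)

static double DistanceMiles(double lat1, double lng1, double lat2, double lng2)
{
    // Haversine formula; Earth radius in miles.
    const double EarthRadiusMiles = 3958.8;
    double dLat = (lat2 - lat1) * Math.PI / 180.0;
    double dLng = (lng2 - lng1) * Math.PI / 180.0;
    double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
             + Math.Cos(lat1 * Math.PI / 180.0) * Math.Cos(lat2 * Math.PI / 180.0)
             * Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
    return EarthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
}

// Kilometres follow from miles: distkm = distmiles * 1.609344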
The problem I'm having is that in order to assign values to these two fields, I'm having to use .ToList(), which pulls all rows from the table. That works OK when there are only a few rows, but with ~1,000 rows it takes approx. 2.2 seconds, and when I increased it to about 12,000 rows the page took 32 seconds to load with no filters applied, i.e. all active adverts returned.
The reason I'm pulling all adverts before calling .Skip and .Take to display them is that the filters available in the search form are based on the possible options across all currently active adverts, i.e. those with time remaining, rather than just selecting a list of manufacturers from the manufacturers table (where a user could choose a manufacturer for which there are no search results), e.g.:
VehicleManufacturers = (from vm in FilteredAdverts.Select(x => x.VehicleManufacturer).Distinct().OrderBy(x => x)
                        select new SearchOptionsModel
                        {
                            Value = vm,
                            Text = vm,
                            Count = FilteredAdverts.Where(x => x.VehicleManufacturer == vm).Count(),
                        })
.... filters for model, mileage etc
To get an idea of what I'm trying to achieve - take a look at the search form on the Autotrader website.
Once all the filters are applied, just before the model is passed to the view, .Skip and .Take are applied, but of course by this time all rows have been pulled.
My question is: how do I go about redoing this? Is there a better method to make use of these non-mapped properties in my Advert entity class? I'm working on my home PC (C2D @ 3.4GHz, 2GB RAM); would the slow queries run OK on a proper web host?

You cannot use server-side paging on a client-side function. That's the short answer. Assuming I understand your need correctly (to filter a list based on proximity to a given zip code), a solution I've used in the past is storing each 'Advert' record with a lat/long for that record's zip code. This data is persisted.
Then, when it comes time to query, calculate a bounding box (lat1, lng1, lat2, lng2) based on X distance from the center (user provided zip code) and filter the query results based on records whose lat/lng fits within this box. You can then apply client side calculations to further and more accurately filter the list, but using this method, you can establish a base filter to minimize the number of records pulled.
Edit: You can order the results of the query based on the absolute distance from the center point in terms of abs(latU-latR) and abs(lngU-lngR) where latU/lngU is the lat/lng of the user provided zip code and latR/lngR is the lat/lng of the record in the db.
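A minimal sketch of that bounding-box prefilter in LINQ to Entities (the Latitude/Longitude columns, userLat/userLng, radiusMiles and the paging variables are assumptions for illustration, not names from the question):

// Prefilter server-side on a bounding box so .Skip()/.Take() still
// translate to SQL; the box is computed client-side as plain doubles.
const double MilesPerDegreeLat = 69.0;
double latDelta = radiusMiles / MilesPerDegreeLat;
double lngDelta = radiusMiles / (MilesPerDegreeLat * Math.Cos(userLat * Math.PI / 180.0));

double lat1 = userLat - latDelta, lat2 = userLat + latDelta;
double lng1 = userLng - lngDelta, lng2 = userLng + lngDelta;

IQueryable<Advert> nearby = FilteredAdverts
    .Where(a => a.Latitude >= lat1 && a.Latitude <= lat2
             && a.Longitude >= lng1 && a.Longitude <= lng2);

// Only the current page is materialized; the exact distances for the
// unmapped DistanceMiles/DistanceKm properties are then computed in
// memory for these few rows only.
var page = nearby.OrderBy(a => a.Id)
                 .Skip(pageIndex * pageSize)
                 .Take(pageSize)
                 .ToList();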

Related

Tableau tooltip incorrect when toggling through quick filter

Link to workbook on public tableau
I created a calculated value to determine the grade for a business, and this is the color map (in the tab Grade per Location).
When I hover over data points on the map (tab Map), it displays the correct Grade, e.g. D for Shish Boom Bah Car Wash.
But as soon as I select any location from the drop-down, all grades become A.
Tot_Avg is calculated like this:
{ EXCLUDE [Location (Loc)] : AVG([Rating]) }
Avg_Rating like this:
AVG([Rating])
And here are the conditions for receiving an A:
IF [Avg_Rating] > ATTR([Tot_Avg]) - (.10 * ATTR([Tot_Avg]))
THEN "A"
How to troubleshoot?
I think your confusion is in what that EXCLUDE is doing. It is NOT ignoring filters. It's just saying not to group by Location when aggregating AVG([Rating]). When you filter out all but one location, AVG([Rating]) and { EXCLUDE [Location (Loc)] : AVG([Rating]) } become equivalent, because with either calculation, you're averaging for all points in your filtered partition.
As a result, your condition for receiving an A will always be true if there's only one location. (Check the math: X > X - .1X → X > .9X)
Here's a different way to get what you're after. Make a calculated field (I'll call it Location Filter):
LOOKUP(ATTR([Location (Loc)]),0)
Then trash your Location filter and replace it with that field. We're doing something sneaky here - we're making the exact same filter as we had before, but we're disguising it as a table calculation (by using LOOKUP()). Tableau doesn't execute table calculations until after it's created the filtered partition, so we've tricked it into letting us use every location while still just examining one.

How could I quickly look-up items in a List of loaded entities

I have built an MVC 5 application, using EF 6 to query the database. One page shows a cross table of two dimensions: substances against properties of these substances. It is rendered as an HTML table. Many cells do not have a value. This is what it looks like:
        sub 1   sub 2   sub 3
prop A  1.0
prop B  1.5             X
prop C          0.6     Y
The cell values are actually more complex, including tool tips, footnotes, etc.
I implemented the generation of the HTML table in the following steps:
1. create a list of unique properties;
2. create a list of unique substances;
3. loop through the properties;
4. render a row for each;
5. loop through the substances;
6. see if there is a value for the combination of property and substance;
7. render the cell's value or an empty one.
Using the ANTS performance profiler, I found that step 6 has a huge performance problem as the numbers of substances and properties increase: the hit count explodes to hundreds of millions with a few hundred substances and a few tens of properties (the largest selection the user can make). The execution time is many minutes. It seems to scale as N(substances)^2 * N(properties)^2.
The code looks like:
Value currentValue =
    values.Where(val => val.substance.Id == currentSubstanceId
                     && val.property.Id == currentPropertyId)
          .SingleOrDefault();
where values is a List and Value is an entity, which I read from to render the cells. values had been pre-loaded from the database and no queries are shown by the SQL Server Profiler.
Since not all cells have a value, I thought it best to loop through the rows and columns and see if there is a value for each combination. I cannot just loop through the list of values.
What could I try to improve this? I thought about:
Create some sort of C# object, using the substance.Id and property.Id as a compound key and fill it from the List object. Which would be fastest?
Create some Linq query which returns an object which already contains the empty cells, like (substance cross join properties) left join values. I could do this in SQL easily, but could this be done with Linq? Could the object which stores the result have the Value as a member field, so I can still use it to render the cells?
Stop pre-loading and just run a database query for the value of each combination, possibly benefiting from database indexes.
I am considering restricting the number of substances and properties the user may select, but I would rather not do that.
Additional info
As requested by C.Zonnenberg, some more info about the query.
The query to fill the list of values is basically as follows:
I create an IQueryable, to which I add filters for the requested substances and properties. I then include the substance, property and value details, found in related entities, and execute query.ToList(). The actual SQL query, as seen in the SQL Profiler, looks complex, involving SubstanceId IN () and PropertyId IN (), but it takes far less than a second to execute.
It returns a list of proxies, like: {System.Data.Entity.DynamicProxies.SubstancePropertyValue_078F758A4FF9831024D2690C4B546F07240FAC82A1E9D95D3826A834DCD91D1E}
I think your best bet is your first option. But to do that efficiently I would also modify the source data (values) and turn it into a dictionary, so you have a structure that's optimized for indexed lookup:
var dict = values.ToDictionary(
    e => Tuple.Create(e.substance.Id, e.property.Id));
Then for each cell:
Value currentValue;
dict.TryGetValue(Tuple.Create(currentSubstanceId, currentPropertyId),
    out currentValue);
Further, you may benefit from parallelization by fetching the cell values in a Parallel.ForEach looping through all substances, for instance.
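Put together, the (sequential) render step might then look like this sketch (uniqueProperties, uniqueSubstances and RenderCell are stand-ins, not names from the question):

// Each cell becomes an O(1) dictionary lookup instead of a linear
// scan over the values list.
foreach (var property in uniqueProperties)
{
    foreach (var substance in uniqueSubstances)
    {
        Value cell;
        dict.TryGetValue(Tuple.Create(substance.Id, property.Id), out cell);
        RenderCell(cell); // cell is null when the combination has no value
    }
}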

How to build a custom key for matching two queries, using Linq-To-Entities

I want to match in-memory entities to data from DB-tables and return a new in-memory DTO with a subset of that matched information. Now, matching involves two columns, thus I am building a new key on the fly. This works, as long as I execute the queries before building the keys, effectively using Linq-To-Objects for the matching.
When not executing the query right away, I receive a runtime exception as described by this MSDN article.
Here is my code and data model, simplified. I have
Rooms (as IEnumerable<Room>, already in memory)
Areas (as IEnumerable<Area>, already in memory)
Alarms (from the DB, as IQueryable from the context)
Alarms are tied to Areas and LocationIds. Rooms can have multiple Areas, and have one LocationId.
I want to build a set of Alarms that occurred in a set of Rooms. This involves matching each Alarm's Area and LocationId to each Room's LocationId and Areas.
from area in allAreas
let alarmKey = area.AreaName + area.Room.LocationId //AreaName is String, LocationId is integer
//....
However, this line involves a cast from int to String that is not supported in LINQ to Entities. How do I create the key?
If you don't mind a number of leading spaces in LocationId you can do
let alarmKey = area.AreaName +
SqlFunctions.StringConvert((double)area.Room.LocationId)
SqlFunctions is in System.Data.Objects.SqlClient.
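For illustration, the converted key can then be used database-side like this (a sketch; context.Alarms, al.AreaName and al.LocationId are assumed names, and Trim() removes the space padding StringConvert adds):

// Build the in-memory keys with plain concatenation (LINQ to Objects)...
List<string> roomKeys = allAreas
    .Select(a => a.AreaName + a.Room.LocationId)
    .ToList();

// ...and match them against keys built on the SQL side, trimming the
// leading spaces that SqlFunctions.StringConvert pads with:
var matchedAlarms = context.Alarms
    .Where(al => roomKeys.Contains(
        al.AreaName +
        SqlFunctions.StringConvert((double)al.LocationId).Trim()))
    .ToList();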

Is it a good idea to store and access an active query resultset in ColdFusion vs re-querying the database?

I have a product search engine using ColdFusion 8 and MySQL 5.0.88.
The product search has two display modes: Multiple View and Single View.
Multiple view displays basic record info; Single view requires additional data to be pulled from the database.
Right now a user does a search and I'm polling the database for
(a) total records and
(b) records FROM to TO.
The user always goes to Single view from his current resultset, so my idea was to store the current resultset for each user and not have to query the database again to get (waste a) the overall number of records and (waste b) the single record I already queried before, and then fetch only the detail information I still need for the Single view.
However, I'm getting nowhere with this.
I cannot cache the current resultset query, because it's unique to each user (session).
The queries run inside a CFINVOKEd method inside a CFC I'm calling through AJAX, so the whole query runs and afterwards the CFC and CFINVOKE method are discarded, meaning I can't use query of query or variables.cfc_storage.
So my idea was to store the current resultset in the Session scope, which would be updated with every new search the user runs (either pagination or a completely new search). The maximum stored would be the number of results displayed.
I can store the query allright, using:
<cfset Session.resultset = query_name>
This stores the whole query with results, like so:
query
CACHED: false
EXECUTIONTIME: 2031
SQL: SELECT a.*, p.ek, p.vk, p.x, p.y
FROM arts a
LEFT JOIN p ON
...
LEFT JOIN f ON
...
WHERE a.aktiv = "ja"
AND
... 20 conditions ...
SQLPARAMETERS: [array]
1) ... 20+ parameters
RESULTSET:
[Record # 1]
a: true
style: 402
price: 2.3
currency: CHF
...
[Record # 2]
a: true
style: 402abc
...
This would be overwritten every time a user does a new search. However, if a user wants to see the details of one of these items, I don't need to query again (total number of records & fetch one record) if I can access the record I need from my temp storage. This way I would save two database trips worth 2,031 ms of execution time each to get data which I already pulled before.
The tradeoff would be every user having a resultset of up to 48 results (the max number of items per page) in Session scope.
My questions:
1. Is this feasible or should I requery the database?
2. If I have a structure/array/object like the above, how do I pick the record I need out of it by style number, i.e. how do I access the resultset? I can't just loop over the stored query (tried this for a while now...).
Thanks for help!
KISS rule. Just re-query the database unless you find the performance is really an issue. With the correct index, it should scale pretty well. When it is an issue, you can simply add query caching there.
QoQ would introduce overhead (on the CF side: memory and computation), and might return stale data (where the query in session is older than the one in the DB). I only use QoQ when the same query is used in the same view, but not throughout a Session's time span.
Feasible? Yes, depending on how many users and how much data this stores in memory, it's probably much better than going to the DB again.
It seems like the best way to get the single record you want is a query of query. In CF you can create another query that uses an existing query as its data source. It would look like this:
<cfquery name="subQuery" dbtype="query">
SELECT *
FROM Session.resultset
WHERE style = #SelectedStyleVariable#
</cfquery>
Note that if you are using CFBuilder, it will probably scream "Error" at you for not having a datasource. This is a bug in CFBuilder; you are not required to have a datasource if your DBType is "query".
Depending on how many records, what I would do is have the detail data stored in application scope as a structure where the ID is the key. Something like:
APPLICATION.products[product_id].product_name
                                .product_price
                                .product_attribute
Then you would really only need to query for the ID of the item on demand.
And to improve the "on demand" query, you have at least two "in code" options:
1. A query of query, where you query the entire collection of items once, and then query from that for the data you need.
2. Verity or SOLR to index everything and then you'd only have to query for everything when refreshing your search collection. That would be tons faster than doing all the joins for every single query.

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way I could implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that can be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database.
Create a linked-list-based index for each category from the IDs of documents belonging to that category. Shuffle it.
Create a Bloom filter containing all of the entries viewed by a particular user.
Traverse the index using the iterator, randomly selecting items and using the Bloom filter to pick not-yet-viewed items.
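For what it's worth, the Bloom-filter piece of that plan is small to sketch in C# (illustrative only; a production filter would size the bit array and hash count from the expected number of views and the target false-positive rate):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Minimal Bloom filter over integer document ids.
class BloomFilter
{
    private readonly BitArray bits;
    private readonly int hashCount;

    public BloomFilter(int size, int hashCount)
    {
        bits = new BitArray(size);
        this.hashCount = hashCount;
    }

    // Double hashing: position_i = h1 + i * h2 (mod m).
    private IEnumerable<int> Positions(int item)
    {
        int h1 = item;
        int h2 = unchecked(item * -1640531527) | 1; // Knuth-style mix, forced odd
        for (int i = 0; i < hashCount; i++)
            yield return (int)(unchecked((uint)(h1 + i * h2)) % (uint)bits.Length);
    }

    public void Add(int item)
    {
        foreach (int p in Positions(item)) bits[p] = true;
    }

    // False positives are possible, false negatives are not: a stray
    // "true" merely skips an unviewed document now and then.
    public bool MightContain(int item)
    {
        return Positions(item).All(p => bits[p]);
    }
}

The traversal in the last step would then walk the shuffled per-category list and serve the first id for which MightContain returns false, calling Add once the document has been shown.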
If you track, via a table, which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p
left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This causes the database to do a one-for-one join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this one operation but there's no guarantee that the url will be passed successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves on to the next one, reads it, continues like that a few times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already-viewed entities is cleared, so you can return an already-viewed entity.
For that option I would recommend saving a (hash) set of already-viewed entity ids; every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set (see the sketch below).
Because the set is so small and your data is so big, the chance of picking an already-viewed id is tiny, so the pick will take O(1) time in the typical case.
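A sketch of that rejection-sampling idea (all names hypothetical):

// Option 1: the viewed set is tiny relative to the category, so a
// random pick almost always succeeds on the first try.
var viewed = new HashSet<int>();   // ids this user has already seen
var rng = new Random();

int PickUnviewed(IList<int> categoryDocIds)
{
    while (true)
    {
        int candidate = categoryDocIds[rng.Next(categoryDocIds.Count)];
        if (viewed.Add(candidate))  // Add returns false for duplicates
            return candidate;
    }
}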
Option 2:
A user pages through the entities, and the viewed entities are shared between all users and across every visit to your page.
In that case you will probably exhaust all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
For that option I would get all the ids for this topic, shuffle them, and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)); a sketch follows.
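A sketch of that option (GetAllIdsForCategory is a hypothetical data-access call; a Fisher-Yates shuffle would be the unbiased alternative to the OrderBy trick):

// Option 2: shuffle once up front, then every request is O(1).
var rng = new Random();
List<int> ids = GetAllIdsForCategory(categoryId); // hypothetical helper
var shuffled = new LinkedList<int>(ids.OrderBy(_ => rng.Next()));

int NextUnviewed()
{
    int id = shuffled.First.Value; // head of the pre-shuffled list
    shuffled.RemoveFirst();        // never served again
    return id;
}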
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [generate stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all. A LINQ rendering is sketched below.
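The same scheme in LINQ terms, as a sketch (db.Documents, CategoryId and the ordering column are assumed names):

// 1.) count, 2.) pick a random offset, 3.) fetch exactly one row at
// that offset under a stable ordering.
int count = db.Documents.Count(d => d.CategoryId == categoryId);
int offset = rng.Next(count);

var doc = db.Documents
    .Where(d => d.CategoryId == categoryId)
    .OrderBy(d => d.Id)   // the [generate stable order] part
    .Skip(offset)
    .Take(1)
    .Single();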
You may want to consider a NoSQL solution like Apache Cassandra. These seem ideally suited to your needs: there are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for very sparsely populated tables.
Edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after that user (e.g. the user id) to the document's row and set it to true. Obviously this table will be huge, with millions of columns and probably quite sparsely populated, but no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission critical that a user never gets a duplicate document).
I've solved similar in the past by indexing the relational database into a document oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the user's last X selections.
Return this choice of text and update the list of the last X selections.
I would experiment to find the best value of X, but I have in mind something like X = 16; a sketch follows.
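As a sketch (X, the queue round-tripped via the cookie, and the names are illustrative):

// Rejection-sample until the pick falls outside the user's last X
// selections, then slide the window forward.
const int X = 16;

int PickText(IList<int> matchingIds, Queue<int> lastX, Random rng)
{
    int pick;
    do
    {
        pick = matchingIds[rng.Next(matchingIds.Count)];
    } while (lastX.Contains(pick));

    lastX.Enqueue(pick);
    if (lastX.Count > X)
        lastX.Dequeue(); // forget the oldest selection
    return pick;
}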
