Repeated uses of TOP in query - ms-access-2013

Finding TOP items in query
I have an Access query with fields (in simplified form) name, type, value. I need to extract the top x records (according to value) FOR EVERY PAIR (name, type), with x depending on the pair. The query already has the column "value" sorted within each pair.
Solution 1. Do separate queries per pair, take the top x in each and build the union of the queries. Wrong: the number of pairs is large, and Access can't handle the resulting query.
Solution 2. Add an extra column to the query, call it "Valid" and set it to True in all records. Then use VBA to traverse the recordset items of the query one by one and set Valid to False for the non-top items. Then do an additional query dropping the false records. Wrong again: the recordset is not editable in VBA (even though "Valid" has nothing to do with any tables used in the query). Yes, I opened the recordset in VBA with dbOpenDynaset -- no dice.
Any ideas? Thanks


How could I quickly look-up items in a List of loaded entities

I have built an MVC 5 application, using EF 6 to query the database. One page shows a cross table of two dimensions: substances against properties of these substances. It is rendered as an HTML table. Many cells do not have a value. This is what it looks like:
sub 1 sub 2 sub 3
prop A 1.0
prop B 1.5 X
prop C 0.6 Y
The cell values are actually more complex, including tool tips, footnotes, etc.
I implemented the generation of the HTML table by the following steps:
1. create a list of unique properties;
2. create a list of unique substances;
3. loop through the properties;
4. render a row for each;
5. loop through the substances;
6. see if there is a value for the combination of property and substance;
7. render the cell's value or an empty one.
Using the ANTS performance profiler, I found out that step 6 has a huge performance issue with increasing numbers of substances and properties: the hit count explodes to hundreds of millions with a few hundred substances and a few tens of properties (the largest selection the user can make). The execution time is many minutes. It seems to scale as N(substances)^2 * N(properties)^2.
The code looks like:
Value currentValue =
    values.Where(val => val.substance.Id == currentSubstanceId
                     && val.property.Id == currentPropertyId).SingleOrDefault();
where values is a List<Value> and Value is an entity which I read from to render the cells. values had been pre-loaded from the database and no queries are shown by the SQL Server Profiler.
Since not all cells have a value, I thought it best to loop through the rows and columns and see if there is a value. I cannot just loop through the list of values.
What could I try to improve this? I thought about:
Create some sort of C# object, using the substance.Id and property.Id as a compound key and fill it from the List object. Which would be fastest?
Create some Linq query which returns an object which already contains the empty cells, like (substance cross join properties) left join values. I could do this in SQL easily, but could this be done with Linq? Could the object which stores the result have the Value as a member field, so I can still use it to render the cells?
Stop pre-loading and just run a database query for the value of each combination, possibly benefiting from database indexes.
I am considering restricting the number of substances and properties the user may select, but I would rather not do that.
Additional info
As requested by C.Zonnenberg, some more info about the query.
The query to fill the list of values is basically as follows:
I create an IQueryable to which I add filters for the requested substances and properties. I then include the substance, property and value details, found in related entities. I then execute query.ToList(). The actual SQL query, as seen by the SQL Profiler, looks complex, involving SubstanceId IN () and PropertyId IN (), but it takes far less than a second to execute.
It returns a list of proxies, like: {System.Data.Entity.DynamicProxies.SubstancePropertyValue_078F758A4FF9831024D2690C4B546F07240FAC82A1E9D95D3826A834DCD91D1E}
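A minimal sketch of how such a query might be built, assuming hypothetical names (db, SubstancePropertyValues, substanceIds, propertyIds); the navigation properties substance and property follow the code above:
using System.Data.Entity; // needed for the lambda form of Include in EF 6

// filter on the requested substances and properties (translates to SQL IN (...))
var query = db.SubstancePropertyValues.AsQueryable();
query = query.Where(v => substanceIds.Contains(v.substance.Id)
                      && propertyIds.Contains(v.property.Id));

// include the related details and materialize the list
var values = query
    .Include(v => v.substance)
    .Include(v => v.property)
    .ToList();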
I think your best bet is your first option. But to do that efficiently I would also modify the source data (values) and turn it into a dictionary, so you have a structure that's optimized for indexed lookup:
var dict = values.ToDictionary(
    e => Tuple.Create(e.substance.Id, e.property.Id),
    e => e);
Then for each cell:
Value currentValue;
dict.TryGetValue(Tuple.Create(currentSubstanceId, currentPropertyId),
    out currentValue);
Further, you may benefit from parallelization by fetching the cell values in a Parallel.ForEach looping through all substances, for instance.
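For instance, a minimal sketch of that parallel loop (using Parallel.ForEach from System.Threading.Tasks; substances and properties are the pre-built lists from the question, and RenderCell is a hypothetical helper):
// reads from dict are safe to run concurrently because nothing writes to it here
Parallel.ForEach(substances, substance =>
{
    var cells = new List<string>();
    foreach (var property in properties)
    {
        Value cellValue;
        dict.TryGetValue(Tuple.Create(substance.Id, property.Id), out cellValue);
        cells.Add(cellValue == null ? "" : RenderCell(cellValue)); // hypothetical renderer
    }
    // hand the finished row to a thread-safe collector that builds the table
});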

Salesforce SOQL query length and efficiency

I am trying to solve a problem of deleting only rows matching two criteria, each being a list of ids. These ids come in pairs: if the item to be deleted has one, it must have the other one of the pair, so just using two IN clauses will not work. I have come up with two solutions.
1) Use the two IN clauses but then loop over the items and check that the two ids in question appear in the correct pairing.
I.e.:
// Relation1__c and Relation2__c must be included in the SELECT so they can be read below
for (Object__c obj : [SELECT Id, Relation1__c, Relation2__c FROM Object__c WHERE Relation1__c IN :idlist1 AND Relation2__c IN :idlist2]) {
    if (preConstructedPairingsAsString.contains('' + obj.Relation1__c + obj.Relation2__c)) {
        listToDelete.add(obj);
    }
}
2) Loop over the ids and build an admittedly long query.
I like the second choice because I only get the items I need and can just throw the list into delete, but I know that Salesforce has hangups with SOQL queries. Is there a penalty to the second option? Is it better to build and query off a long string, or to get more objects than necessary and filter?
In general you want to put as much logic as you can into SOQL queries, because that won't use any script statements and they execute faster than your code. However, there is a 10k character limit on SOQL queries (it can be raised to 20k), so based on my back-of-the-envelope calculations you'd only be able to put in 250 id pairs or so before hitting that limit.
I would go with option 1 or if you really care about efficiency you can create a formula field on the object that pairs the ids and filter on that.
formula: relation1__c + '-' + relation2__c
for (List<Object__c> objs : [SELECT Id FROM Object__c WHERE formula__c IN :idpairs]) {
    delete objs;
}
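For completeness, a small sketch (an assumption, not part of the answer) of how the idpairs list could be built to match that formula, assuming idlist1 and idlist2 are aligned by index as the pairs are described:
// hypothetical helper: build 'id1-id2' keys matching the formula field
List<String> idpairs = new List<String>();
for (Integer i = 0; i < idlist1.size(); i++) {
    idpairs.add('' + idlist1[i] + '-' + idlist2[i]);
}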

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way I could implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that might be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database
Create a linked-list based index for each category from the IDs of documents belonging to this category. Shuffle it
Create a Bloom filter containing all of the entries viewed by a particular user
Traverse the index using the iterator, randomly selecting items and using the Bloom filter to skip already-viewed items.
If you track via a table which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This causes the database to go ahead and do a 1-for-1 join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this one operation but there's no guarantee that the url will be passed successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves to the next one, reads it, continues like this a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already viewed entities is cleared, so you may return an already viewed entity.
In that case I would recommend saving a (hash) set of already viewed entity ids; every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of picking an already viewed id is tiny, so this takes O(1) most of the time.
Option 2:
A user pages through the entities and the viewed entities are saved across all users and across every visit to your page.
In that case you will probably exhaust all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
In that case I would get all the ids for this category, shuffle them and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)).
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
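For example, the optimistic check could be a single indexed lookup; a sketch with hypothetical table and column names:
-- returns no row in the common case, i.e. the randomly selected document is unseen
-- (placeholder syntax varies by driver)
SELECT 1
FROM viewed
WHERE user_id = ?
  AND category_id = ?
  AND document_id = ?;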
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [generate stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL LIMIT clause, SQL Server TOP clause, etc.)
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above you don't have to store any state information at all.
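A minimal MySQL sketch of steps 1-3, assuming hypothetical documents and document_categories tables:
-- 1) count the documents in the category
SELECT COUNT(*)
FROM documents d
JOIN document_categories dc ON dc.document_id = d.id
WHERE dc.category_id = 42;

-- 2) pick a random offset in [0, count) in application code, then
-- 3) fetch exactly one document at that offset using a stable order
SELECT d.*
FROM documents d
JOIN document_categories dc ON dc.document_id = d.id
WHERE dc.category_id = 42
ORDER BY d.id
LIMIT 1 OFFSET 12345;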
You may want to consider a NoSQL solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
Edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named for that user to the document's row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but no problem; reading this is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where the column for that user is null.
You should get constant-time writes and reads, amazing scalability, etc. if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission critical that a user never gets a duplicate document).
I've solved a similar problem in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and it's basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
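A rough sketch of that query against the classic Lucene API (field names taken from the description above; searcher is an assumed IndexSearcher, and newer Lucene versions use BooleanQuery.Builder instead):
import org.apache.lucene.index.Term;
import org.apache.lucene.search.*;

// match the category, exclude texts this user has already read
BooleanQuery query = new BooleanQuery();
query.add(new TermQuery(new Term("categoryId", categoryId)), BooleanClause.Occur.MUST);
query.add(new TermQuery(new Term("userId", userId)), BooleanClause.Occur.MUST_NOT);
TopDocs hits = searcher.search(query, 100);
// pick one of hits.scoreDocs at random and display the corresponding text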
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the last X selections of the user.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X but I have in mind something like an X of say 16?

Windows Azure Paging Large Datasets Solution

I'm using Windows Azure Table Storage to store millions of entities. However, I'm trying to figure out the best solution that easily allows for the following:
1) a search on an entity, will retrieve that entity and at least (pageSize) number of entities either side of that entity
2) if there are more entities beyond (pageSize) number of entities either side of that entity, then page next or page previous links are shown, this will continue until either the start or end is reached.
3) the order is reverse chronological order
I've decided that the PartitionKey will be the Title provided by the user as each container is unique in the system. The RowKey is Steve Marx's lexicographical algorithm:
http://blog.smarx.com/posts/using-numbers-as-keys-in-windows-azure
which, when converted to JavaScript instead of C#, looks like this:
pad(new Date(100000000 * 86400000).getTime() - new Date().getTime(), 19) + "_" + uuid()
uuid() is a JavaScript function that returns a GUID, and pad adds zeros up to 19 chars in length. So records in the system look something like this:
PK RK
TEST 0008638662595845431_ecf134e4-b10d-47e8-91f2-4de9c4d64388
TEST 0008638662595845432_ae7bb505-8594-43bc-80b7-6bd34bb9541b
TEST 0008638662595845433_d527d215-03a5-4e46-8a54-10027b8e23f8
TEST 0008638662595845434_a2ebc3f4-67fe-43e2-becd-eaa41a4132e2
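For reference, a minimal sketch of what the pad helper described above might look like (an assumed implementation, not taken from the question):
function pad(number, length) {
    // left-pad with zeros until the string is `length` characters long
    var str = String(number);
    while (str.length < length) {
        str = '0' + str;
    }
    return str;
}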
This pattern allows for every new entity inserted to be at the top of the list which satisfies point number 3 above.
With a nice way of adding new records to the system, I thought I would then create a mechanism that looks at the first half of the RowKey, i.e. the 0008638662595845431_ part, and does a greater-than or less-than comparison depending on which direction from the already found item I want to page. In other words, to get the row immediately before 0008638662595845431 I would do a query like so:
var tableService = azure.createTableService();
var minPossibleDateTimeNumber = pad(new Date(-100000000 * 86400000).getTime() - new Date().getTime(), 19);
tableService.getTable('testTable', function (error) {
    if (error === null) {
        var query = azure.TableQuery
            .select()
            .from('testTable')
            .where('PartitionKey eq ?', 'TEST')
            .and('RowKey gt ?', minPossibleDateTimeNumber + '_')
            .and('RowKey lt ?', '0008638662595845431_')
            .and('Deleted eq ?', 'false');
        // the query is then executed with something like:
        // tableService.queryEntities(query, function (error, entities, continuationToken) { ... });
    }
});
If more than 1000 results are returned and Azure gives me a continuation token, then I thought I would remember the last item's RowKey, i.e. the number part 0008638662595845431. So the next query will have the remembered value as the starting value, etc.
I am using the Windows Azure Node.js SDK and the language is JavaScript.
Can anybody see gotchas or problems with this approach?
I do not see how this can work effectively and efficiently, especially to get the rows for a previous page.
To be efficient, the prefix of your “key” needs to be a serially incrementing or decrementing value, instead of being based on a timestamp. A timestamp-generated value would have duplicates as well as holes, making mapping page size to row count at best inefficient and at worst difficult to determine.
Also, this potential algorithm is dependent on a single partition key, destroying table scalability.
The challenge here would be to have a method of generating a serially incremented key. One solution is to use a SQL database and perform an atomic update on a single row, such that an incrementing or decrementing value is produced in sequence. Something like UPDATE … SET X = X + 1 and return X. Maybe using a stored procedure.
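For example, a minimal sketch of such an atomic increment on SQL Server (the Counters table and column names are assumptions):
-- atomically increment and return the new value in one statement
UPDATE Counters
SET NextValue = NextValue + 1
OUTPUT inserted.NextValue
WHERE CounterName = 'paging';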
So the key could be a zero-left-padded serially generated number, split such that, say, the first N digits of the number are the partition key and the remaining M digits are the row key.
For example
PKey RKey
00001 10321
00001 10322
….
00954 98912
Now, since the rows are in sequence it is possible to write a query with the exact key range for the page size.
Caveat: there is a small risk of a failure occurring between generating a serial key and writing to table storage, in which case there may be holes in the table. However, your paging algorithm should be able to detect and work around such instances quite easily, by specifying a page size slightly larger than necessary or by retrying with an adjusted range.

How do I sort, group a query properly that returns a tuple of an orm object and a custom column?

I am looking for a way to have a query that returns a tuple first sorted by a column, then grouped by another (in that order). Simply .sort_by().group_by() didn't appear to work. Now I tried the following, which made the return value go wrong (I just got the orm object, not the initial tuple), but read for yourself in detail:
Base scenario:
There is a query which queries for test orm objects linked from the test3 table through foreign keys.
This query also returns a column named linked that either contains true or false. It is originally ungrouped.
my_query = session.query(test_orm_object)
... lots of stuff like joining various things ...
add_column(..condition that either puts 'true' or 'false' into the column..)
So the original return value is a tuple (the orm object, and additionally the true/false column).
Now this query should be grouped for the test orm objects (so the test.id column), but before that, sorted by the linked column so entries with true are preferred during the grouping.
Assuming the current unsorted, ungrouped query is stored in my_query, my approach to achieve this was this:
# Get a sorted subquery
tmpquery = my_query.order_by(desc('linked')).subquery()
# Read the column out of the sub query
my_query = session.query(tmpquery).add_columns(getattr(tmpquery.c,'linked').label('linked'))
my_query = my_query.group_by(getattr(tmpquery.c, 'id')) # Group objects
The resulting SQL query when running this is (it looks fine to me btw - the subquery 'anon_1' is properly sorted inside itself, then fetched, and its id as well as the 'linked' column are extracted (amongst a few other columns SQLAlchemy apparently wants to have), and the result is properly grouped):
SELECT anon_1.id AS anon_1_id, anon_1.name AS anon_1_name, anon_1.fk_test3 AS anon_1_fk_test3, anon_1.linked AS anon_1_linked, anon_1.linked AS linked
FROM (
SELECT test.id AS id, test.name AS name, test.fk_test3 AS fk_test3, CASE WHEN (anon_2.id = 87799534) THEN 'true' ELSE 'false' END AS linked
FROM test LEFT OUTER JOIN (SELECT test3.id AS id, test3.fk_testvalue AS fk_testvalue
FROM test3)
AS anon_2 ON anon_2.fk_testvalue = test.id ORDER BY linked DESC
)
AS anon_1 GROUP BY anon_1.id
I tested it in phpMyAdmin, where it gave me, as expected, the id column (for the orm object id), then the additional columns SQLAlchemy seems to want there, and the linked column. So far, so good.
Now my expected return values would be, as they were from the original unsorted, ungrouped query:
A tuple: 'test' orm object (anon_1.id column), 'true'/'false' value (linked column)
The actual return value of the new sorted/grouped query is however (the original query DOES indeed return a tuple before the code above is applied):
'test' orm object only
Why is that so and how can I fix it?
Excuse me if that approach turns out to be somewhat flawed.
What I actually want is, have the original query simply sorted, then grouped without touching the return values. As you can see above, my attempt was to 'restore' the additional return value again, but that didn't work. What should I do instead, if this approach is fundamentally wrong?
Explanation for the subquery use:
The point of the whole subquery is to force SQLAlchemy to execute this query separately as a first step.
I want to order the results first, and then group the ordered results. That seems to be hard to do properly in one step (when trying manually with SQL I had issues combining order and group by in one step as I wanted).
Therefore I don't simply order, group, but I order first, then subquery it to enforce that the order step is actually completed first, and then I group it.
Judging from manual phpMyAdmin tests with the generated SQL, this seems to work fine. The actual problem is that the original query (which is now wrapped as the subquery you were confused about) had an added column, and now, by wrapping it up as a subquery, that column is gone from the overall result. And my attempt to re-add it to the outer wrapping failed.
It would be much better if you provided examples. I don't know if these columns are in separate tables or what not. Just looking at your first paragraph, I would do something like this:
a = session.query(Table1, Table2.column).\
    join(Table2, Table1.foreign_key == Table2.id).\
    filter(...).\
    group_by(Table2.id).\
    order_by(Table1.property.desc()).\
    all()
I don't know exactly what you're trying to do since I need to look at your actual model, but it should look something like this with maybe the tables/objs flipped around or more filters.
