We have a database with 200,000 vendors across 100+ categories. When someone visits the website, we want to let them select a category and show them 25 vendors per page. At first we ordered by VendorId, but that always returned the same first 25, so we removed the ORDER BY; now the paging sometimes repeats vendors. Is there a way to get a random 25 vendors and also keep the paging?
Regards
You can randomize your results, but every time you run the query it will create a new random list, so unless you randomize once, save the randomized state in your code, and page over it, this can't be done in a straightforward way.
Refer to: SQL Query results pagination with random Order by in SQL Server 2008
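One way to "save the randomized state" is to derive the order from a per-visitor seed, so the shuffle is deterministic for that visitor. Below is a minimal sketch in Python (SQLite flavor); the Vendor(VendorId, CategoryId, Name) table and the multiply-mod-prime scramble are assumptions, not your actual schema:

import random

def fetch_page(conn, category_id, seed, page, page_size=25):
    # Same seed => same pseudo-random order, so OFFSET-based paging
    # never repeats or skips vendors while this visitor pages through.
    return conn.execute(
        """SELECT VendorId, Name FROM Vendor
           WHERE CategoryId = ?
           ORDER BY (VendorId * ?) % 1000000007
           LIMIT ? OFFSET ?""",
        (category_id, seed, page_size, page * page_size),
    ).fetchall()

# Generate the seed once per visitor and keep it in the session or URL:
seed = random.randrange(1, 1000000007)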
I believe this requirement is impossible to implement if all three conditions must hold: a new random order is needed every time, performance must be good, and every item should have an equal chance of being selected. I believe you should redesign the way your application works.
One possible workaround is to add a couple of extra columns to the table and fill them with random numbers. When a user requests the list, assign one of the random columns to him (stick it in the URL, for example), then do an ORDER BY on that column and display the results. Randomly switching between 4-5 columns creates the appearance of randomness. Update the random numbers in the columns once a day.
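A rough sketch of that workaround, assuming hypothetical pre-filled columns Rand1..Rand4 on the Vendor table (the nightly job that refreshes them is not shown):

import random

SHUFFLE_COLUMNS = ("Rand1", "Rand2", "Rand3", "Rand4")  # assumed names

def fetch_page_by_column(conn, category_id, shuffle_col, page, page_size=25):
    # shuffle_col always comes from SHUFFLE_COLUMNS (our own whitelist),
    # so interpolating it into the SQL text is safe here.
    assert shuffle_col in SHUFFLE_COLUMNS
    return conn.execute(
        f"""SELECT VendorId, Name FROM Vendor
            WHERE CategoryId = ?
            ORDER BY {shuffle_col}
            LIMIT ? OFFSET ?""",
        (category_id, page_size, page * page_size),
    ).fetchall()

# Assign the column once per visitor and carry it in the URL:
col = random.choice(SHUFFLE_COLUMNS)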
I have a MySQL table of records with several fields.
These records are shown and updated live in the browser. They are displayed in an order chosen by the user, with optional filters also chosen by the user.
Sometimes a change is made to one of the records, and it may affect the order for a given user.
In order to position the record correctly in the list, I need to find where its new offset falls after the change. Basically, I need to get the "id" of the record that now comes before it in the MySQL results, so that I can use JavaScript on the client side to reposition the record on the screen.
In raw SQL, I'd do something like this:
SET @rank = 0;

SELECT rank
FROM
  (SELECT @rank := @rank + 1 AS rank,
          subQuery.id AS innerQuery
   FROM ...{rest of custom query here}... AS subQuery)
AS outerQuery WHERE outerQuery.innerQuery = {ID TO FIND};
Then I can just subtract 1 from the resulting rank and find the ID of the record that comes before the one in question.
Is this kind of query possible with Laravel's query builder? Or is there a better strategy than what I've come up with here to accomplish the same task?
EDIT: There are a lot of records, so if possible I'd like to avoid loading all of them into memory to find the offset of the record. They are loaded on the screen in an "infinite scroll" fashion, since it would be too much to load all of them at once.
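For what it's worth, here is a sketch of an equivalent query that avoids user variables entirely: count how many rows sort before the target row under the same ORDER BY and filters. The records table and position column are hypothetical stand-ins for the custom query:

def find_offset(conn, record_id):
    # The number of rows that precede the record under the current
    # ordering is exactly its 0-based offset in the result set; the
    # row at offset - 1 is then the one that comes before it.
    return conn.execute(
        """SELECT COUNT(*) FROM records
           WHERE position < (SELECT position FROM records WHERE id = ?)""",
        (record_id,),
    ).fetchone()[0]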
This is a problem I have been thinking about for a long time, but I haven't written any code yet because I first want to solve some general problems I am struggling with. This is the main one.
Background
A single page web application makes requests for data to some remote API (which is under our control). It then stores this data in a local cache and serves pages from there. Ideally, the app remains fully functional when offline, including the ability to create new objects.
Constraints
Assume a server-side database of products containing ±50,000 products (50 MB)
Assume no particular DB type; we interact with it via a REST/GraphQL interface
Assume a single product record is < 1 kB
Assume a max payload of 256 kB per result set
Assume max 5 MB of storage on the client
Assume search result sets ranging between 0 and 5,000 items per search
Challenge
The challenge is to define a stateless but (network-)efficient way to fetch pages from a result set so that it is deterministic which results we will get.
Example
In traditional paging, when getting the next 100 results for some query using this URL:
https://example.com/products?category=shoes&firstResult=100&pageSize=100
the search result may look like this:
{
  "totalResults": 2458,
  "firstResult": 100,
  "pageSize": 100,
  "results": [
    {"some": "item"},
    {"some": "other item"},
    // 98 more ...
  ]
}
The problem with this is that there is no way, based on this information, to get exactly the objects that were on a certain page, because by the time we request the next page, the result set may have changed (due to changes in the DB), influencing which items are part of it. Even a small change can have a big impact: one item removed from the DB that happened to be on page 0 of the result set will change the results we get when requesting every subsequent page.
Goal
I am looking for a mechanism that makes the definition of the result set independent of future database changes, so that if someone searched for shoes and got a result set of 2,458 items, they could reliably fetch all pages of that result set even if later changes in the DB would otherwise have influenced it. (For this purpose I plan to never really delete items, but to set a removed flag on them instead.)
Ideas so far
I have seen a solution where the result set included a "pages" property: an array with the first and last ID of the items on each page. Assuming your IDs keep increasing and you never physically delete items from the DB, the number of items between two IDs is constant, so the app could fetch all items between those two IDs and always get exactly the same items back. The problem with this solution is that it only works if the list is sorted in ID order... and I need custom sorting options.
The only way I have come up with so far is to just send a list of all the IDs in the result set... That way pages can be fetched by doing a SELECT * FROM products WHERE id IN (3,4,6,9,...)... but this feels rather inelegant...
Anyway, I hope this is not too broad or theoretical. I have a web-based DB, just no good idea of how to do paging with it. I am looking for answers that point me in a direction to learn, not full solutions.
Versioning the DB is the answer for result-set consistency.
Each record has a primary id, a modification counter (version number), and a timestamp of modification/creation. Instead of modifying record r, you add a new record with the same id, version number + 1, and sysdate as the modification time.
In each fetch response you include the DB request_time (do not use a client timestamp, due to the possible difference in time between client and server). The first page is served normally, but you return sysdate as request_time. The other pages are served differently: you add a condition like modification_time <= request_time for each versioned table.
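A minimal sketch of that read path, assuming a products table with (id, version, modification_time) columns as described; all names are placeholders:

VERSIONED_PAGE_SQL = """
SELECT p.*
FROM products p
JOIN (
    -- For each id, the newest version that already existed at request_time.
    SELECT id, MAX(version) AS version
    FROM products
    WHERE modification_time <= :request_time
    GROUP BY id
) latest ON latest.id = p.id AND latest.version = p.version
ORDER BY p.name
LIMIT :page_size OFFSET :first_result
"""

def fetch_versioned_page(conn, request_time, first_result, page_size=100):
    # Every page of the same result set is filtered to the snapshot
    # taken when page 0 was served, so later writes cannot shift it.
    return conn.execute(VERSIONED_PAGE_SQL, {
        "request_time": request_time,
        "first_result": first_result,
        "page_size": page_size,
    }).fetchall()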
You can cache the result set of IDs on the server side when a query first arrives and return a unique ID to the frontend; this unique ID corresponds to the result set for that query. The frontend can then request something like next_page with the unique ID it got when it first made the query. You should still go ahead with your approach of changing the DELETE operation to a removed flag, because that makes sure none of the entries in the result set get deleted. You can discard a cached result set when the frontend reaches the end of it, or set a time limit on the lifetime of the cache entry.
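A sketch of that cached-result-set idea, with an in-memory dict standing in for whatever server-side cache you use; the schema and sort order are assumptions:

import uuid

def start_query(conn, cache, category):
    # First request: materialize the matching ids in their sorted order
    # and hand the client an opaque token identifying the frozen set.
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM products WHERE category = ? ORDER BY name",
        (category,))]
    token = uuid.uuid4().hex
    cache[token] = ids
    return token, len(ids)

def next_page(conn, cache, token, page, page_size=100):
    ids = cache[token][page * page_size:(page + 1) * page_size]
    if not ids:
        return []
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        f"SELECT * FROM products WHERE id IN ({placeholders})", ids
    ).fetchall()
    # IN (...) does not preserve order, so re-sort to the cached order.
    by_id = {row[0]: row for row in rows}
    return [by_id[i] for i in ids]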
I am trying to create an application for work. The app will be used internally and should allow us to assign barcode numbers to our product SKUs. I am using Visual Studio / Visual Basic 2010 Express to build this, as my very limited, beginner-level experience is with VS 2010 Express.
I'll give a bit of information about how I see this application working and then I'll get on with my actual question:
I see the app allowing us to create a new product in the database: a user enters the SKU and description of the product, then the app assigns the product the next available base number for the barcode, and from there the app will (if required) generate the correct EAN13 and GTIN14 barcodes and store them against that SKU.
As a company we have a large range of barcode numbers we can use, and we have split this range up so that the first 50,000 (for example) are for our EAN13 codes, the next 50K are for our GTIN14 codes for inner cartons, and the remaining 50K are for master cartons.
So in order to achieve this I have my Product table, which contains the fields 'SKU', 'Description' and 'BarcodeBase'. I have managed to set the BarcodeBase field as unique, and I am attempting to use AutoIncrement (seed & step) to make sure that each product is assigned a base barcode (before I calculate the check digit) that falls within the EAN13 range described above...
So finally my question is: Is there a way I can put an upper limit on AutoIncrement so that on the off chance, way way in the future, the base barcode number will not overflow into the next range?
I've been googling unsuccessfully for an answer; I am only coming across things that talk about the data type of the field having a limit, for example the upper limit of an Int32. Through my searches I have become vaguely aware of the 'Expression' property of the field and also of the possibility of coding a partial class, but I don't know if that is the right direction to go in or if there is something much simpler that I am overlooking / have not found.
I would really appreciate any help!
Edit: As per GrandMasterFlush's comment, I have added a local database to my VS project, so I think I am using a SQL Server Compact 3.5 DB.
Use a CHECK constraint, e.g.:
ALTER TABLE dbo.Product ADD CONSTRAINT ...
CHECK (BarcodeBase BETWEEN 1 AND 50000);
I suggest you do not make BarcodeBase an IDENTITY column in the Product table (IDENTITY is the feature you are referring to as "autoincrement"). IDENTITY is really designed for surrogate-key use only and isn't ideal for meaningful business data: you can't update an IDENTITY column, it isn't necessarily sequential, it may have gaps in the number sequence, and you only get one IDENTITY column per table. Instead of using IDENTITY in the Product table, you can generate the sequence elsewhere, for example by incrementing a single value stored in a one-row table, as sketched below.
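A sketch of that one-row sequence table, kept in Python for consistency with the other examples here; the BarcodeSequence table and NextBase column are assumed names. The CHECK constraint above remains the hard stop at the top of the EAN13 range:

def next_barcode_base(conn):
    # Increment-then-read inside a single transaction, so two
    # concurrent inserts can never be handed the same base number.
    with conn:
        conn.execute("UPDATE BarcodeSequence SET NextBase = NextBase + 1")
        return conn.execute(
            "SELECT NextBase FROM BarcodeSequence").fetchone()[0]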
newparts_calc
if ([MonthToDateQuery].[G/L Account] = 4200 and [Query1].[G_L_Group] = 'NEW') then ([Credit Amount] - [Debit Amount]) else (0)
Data Item1
total([newparts_calc])
I need Data Item1 to return the newparts_calc values only.
So, for example, in the 1st row Data Item1 should be 8,540.8, but it is 34,163.2.
What's wrong, and how do I fix it?
REVISED QUESTION
I apologize for not making sense in the original question.
I have many calcs like this that I'm trying to gather and put on a crosstab. I want to see sales by month (rows) and part category (columns).
[Query2] is the one shown in the picture above.
It joins [MonthToDateQuery] and [Query1].
The join is on 'Invoice' and the cardinality is 1..1 = 1..1.
[MonthToDateQuery] is based on the package I'm working in (General Ledger). It supplies the G/L entries for each sales G/L account.
[Query1] is a SQL query I brought in to be able to break categories out even further, using the G/L group.
For example, G/L account 4300 is Rebuilt. However, I needed to break that out even further, to see Rebuilt-Production and Rebuilt-New; I can do that with the G/L group.
I saw that my G/L account ledger entries referenced the invoice number, so that's how I tied in my SQL.
So, as you can see from the table below (which is the tabular data view of the query), I need a total. I have tried plugging newparts_calc into my crosstab and setting its aggregation to Total, but the numbers still don't seem right. I don't think I have something set the way it should be.
All the calcs I'm doing are based on single or multiple G/L accounts and single or multiple G/L groups.
Any advice?
As you can see, the problem seems to be duplicate invoice numbers.
How can I fix this?
A couple of things come to mind:
-Set the processing order to 2.
-Since your calc is always a multiple and you are joining two queries, you may need to check your cardinality. Sometimes it helps to add derived queries to ensure you are working with the correct grain.
I'm obviously missing something, but if you want
I need Data Item1 to return newparts_calc values only.
just use newparts_calc, without total? That would give you the proper value for row 1 :-)
If you need a running total over days (a sum of the values for previous days), you should use the running_total function.
At a guess, one of your two queries is returning multiple rows for each invoice, which will cause this double counting. Look at the output of the two queries and see if that's happening. If so, you just need to work out how to collapse them down to one row per invoice.
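As a sketch of that collapsing step, if the source of [Query1] were pre-aggregated in SQL before the join, something along these lines would do it (table and column names are hypothetical):

def collapsed_query1(conn):
    # One output row per invoice, so a 1..1 join on Invoice can no
    # longer multiply the G/L amounts.
    return conn.execute(
        """SELECT invoice, MAX(g_l_group) AS g_l_group
           FROM parts_detail
           GROUP BY invoice"""
    ).fetchall()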
Per your new question: the underlying data has got to be causing the issue. It's clearly not 1:1 (note that even though 1:1 is your stated cardinality, Cognos does not enforce it). The invoice number is not unique; G/L group is at a lower level of detail.
Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way to implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that could be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database.
For each category, build a linked-list index from the IDs of the documents belonging to that category, and shuffle it.
Create a Bloom filter containing all of the entries viewed by a particular user.
Traverse the index with an iterator, using the Bloom filter to skip already-viewed items and randomly pick not-yet-viewed ones.
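A compact sketch of that plan; a plain Python set stands in for the per-user Bloom filter (a real Bloom filter would just replace the membership test), and integer document IDs are assumed:

import random

class CategoryIndex:
    def __init__(self, doc_ids):
        # Shuffle once when the index is built; every user walks the
        # same shuffled order but keeps their own cursor into it.
        self.order = list(doc_ids)
        random.shuffle(self.order)

    def next_unseen(self, seen, cursor):
        # 'seen' plays the role of the Bloom filter: a (possibly
        # probabilistic) membership test over viewed document IDs.
        for i in range(cursor, len(self.order)):
            if self.order[i] not in seen:
                return self.order[i], i + 1
        return None, len(self.order)  # category exhausted for this user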
If you track, via a table, which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id and v.userid = 'jj'
where v.url_id is null
order by rand()
limit 1
This causes the database to do a one-for-one join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this one operation, but then there's no guarantee that the URL will be passed successfully to the user.
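For what it's worth, a sketch of that single-operation variant, assuming a DB-API cursor (e.g. PyMySQL): the claim row is inserted straight from the anti-join, and because INSERT ... SELECT returns no rows, nothing comes back to hand to the user, which is exactly the caveat above.

def claim_unseen_url(cursor, userid):
    # Atomically record one not-yet-seen URL as viewed for this user.
    # MySQL-flavored SQL; the random pick happens inside the INSERT.
    cursor.execute(
        """INSERT INTO viewed (userid, url_id)
           SELECT %s, p.url_id
           FROM pages p
           LEFT JOIN viewed v ON v.url_id = p.url_id AND v.userid = %s
           WHERE v.url_id IS NULL
           ORDER BY rand()
           LIMIT 1""",
        (userid, userid),
    )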
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves on to the next one, reads it, continues like that a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already-viewed entities has been cleared, so you may return an already-viewed entity.
In that option I would recommend saving a (hash) set of already-viewed entity IDs; every time the user asks for a random entity, choose one randomly from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of drawing an already-viewed ID is so small that this takes O(1) time most of the time.
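A sketch of that optimistic draw-and-check loop; the documents table and the ORDER BY RANDOM() single-row pick are illustrative (any cheap random-row method will do):

def random_unseen(conn, category, seen, max_tries=20):
    # With few viewed items relative to the category size, the first
    # draw almost always succeeds, so the expected cost is O(1).
    for _ in range(max_tries):
        row = conn.execute(
            """SELECT id FROM documents WHERE category = ?
               ORDER BY RANDOM() LIMIT 1""",
            (category,),
        ).fetchone()
        if row and row[0] not in seen:
            seen.add(row[0])
            return row[0]
    return None  # pathological case: give up after max_tries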
Option 2:
A user pages through the entities, and the viewed entities are saved across all users and every visit to your page.
In that case you will probably use up all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
In that option I would get all the IDs for this topic, shuffle them, and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)).
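A minimal version of that shuffled queue; a deque gives the O(1) head removal the linked list is there for:

import random
from collections import deque

def build_queue(ids):
    ids = list(ids)
    random.shuffle(ids)  # shuffle once up front
    return deque(ids)

def next_entity(queue):
    # O(1) pop from the head; each entity is handed out exactly once.
    return queue.popleft() if queue else None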
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then take an optimistic approach to randomly selected documents? In the vast majority of cases, a randomly selected document will be unread by the user, and you can check that quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [generate stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
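The three steps as a sketch; the stable order here is simply ORDER BY id, and the documents schema is assumed:

import random

def random_document(conn, category):
    # 1.) count the matching documents
    n = conn.execute(
        "SELECT COUNT(*) FROM documents WHERE category = ?", (category,)
    ).fetchone()[0]
    if n == 0:
        return None
    # 2.) pick a random offset in range
    offset = random.randrange(n)
    # 3.) fetch exactly that row under a stable order
    return conn.execute(
        """SELECT * FROM documents WHERE category = ?
           ORDER BY id LIMIT 1 OFFSET ?""",
        (category, offset),
    ).fetchone()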
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after that user to the document's row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but that's no problem; reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission-critical that a user never gets a duplicate document).
I've solved similar in the past by indexing the relational database into a document oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts, with a textId (relational database ID) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their ID to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId; pick one randomly and display it.
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the user's last X selections.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like 16.
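A sketch of that sliding window, with X = 16 as suggested (cookie encoding and the candidate query are left out):

import random

def pick_text(candidate_ids, last_x, x=16):
    # Filter out the user's recent picks, choose at random, then slide
    # the window so only the newest X selections are kept.
    pool = [c for c in candidate_ids if c not in last_x]
    if not pool:
        return None, last_x
    choice = random.choice(pool)
    last_x = (last_x + [choice])[-x:]
    return choice, last_x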