I am looking to create a large list of items that allows for easy insertion of new items and for easily changing the position of items within that list. When updating the position of an item, I want to change as few fields as possible regarding the order of items.
After some research, I found that Jira's Lexorank algorithm fulfills all of these needs. Each story in Jira has a 'rank-field' containing a string which is built up of 3 parts: <bucket>|<rank>:<sub-rank>. (I don't know whether these parts have actual names; this is what I will call them for ease of reference.)
Examples of valid rank-fields:
0|vmis7l:hl4
0|i000w8:
0|003fhy:zzzzzzzzzzzw68bj
When dragging a card above 0|vmis7l:hl4, the new card will receive rank 0|vmis7l:hl2, which means that only the rank-field for this new card needs to be updated while the entire list can always be sorted on this rank-field. This is rather clever, and I can't imagine that Lexorank is the only algorithm to use this.
Is there a name for this method of sorting used in the sub-rank?
My question is related to the creation of new cards in Jira. Each new card starts with an empty sub-rank, and the rank is always chosen such that the new card is located at the bottom of the list. I've created a bunch of new stories just to see how the rank would change, and it seems that the rank is always incremented by 8 (in base-36).
Does anyone know more specifically how the rank for new cards is generated? Why is it incremented by 8?
I can only imagine that after some time (270 million cards) there are no more ranks to generate, and the system needs to recalculate the rank-field of all cards to make room for additional ranks.
Are there other triggers that require recalculation of all rank-fields?
I suppose the bucket plays a role in this recalculation. I would like to know how?
We are talking about a special kind of indexing here. This is not sorting; it is just preparing items to end up in a certain order in case someone happens to sort them (by whatever sorting algorithm). I know that variants of this kind of indexing have been used in libraries for decades, maybe centuries, to ensure that books belonging together but lacking a common title end up next to each other on the shelves, but I have never heard of a name for it.
The 8 is probably chosen wisely as a compromise, maybe even by analyzing typical use cases. Consider this: If you choose a small increment, e.g. 1, then all tickets will have ranks like [a, b, c, …]. This is great if you create a lot of tickets (up to 26) in the correct order, because then your rank fields stay small (one letter). But as soon as you move a ticket between two other tickets, you have to add a letter: [a, b] plus a new ticket between them: [a, an, b]. If you expect this to happen a lot, you'd better leave gaps between the ranks: [a, i, q, …]; then an additional ticket can get a single letter as well: [a, e, i, q, …]. But of course, if you now create lots of tickets in the correct order right at the beginning, you quickly run out of letters: [a, i, q, y, z, za, zi, zq, …]. The 8 is probably a good value which allows for enough gaps between the tickets without needing many letters too soon. Keep in mind that other scenarios (maybe not Jira tickets, which are created manually) might make other values more reasonable.
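To illustrate the insertion idea with a sketch (this is not Jira's actual code, just the midpoint trick in base-36; all names are mine):

const DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz";

// Find a base-36 string strictly between prev and next (assumes prev < next
// and that next is not prev plus trailing zeros).
function between(prev: string, next: string): string {
  let result = "";
  for (let i = 0; ; i++) {
    const p = i < prev.length ? DIGITS.indexOf(prev[i]) : 0;
    const n = i < next.length ? DIGITS.indexOf(next[i]) : DIGITS.length;
    if (n - p > 1) {
      // A gap at this digit: take the midpoint and stop.
      return result + DIGITS[Math.floor((p + n) / 2)];
    }
    // No gap yet: copy the smaller digit and keep looking.
    result += DIGITS[p];
  }
}

console.log(between("hl2", "hl4")); // "hl3"
console.log(between("a", "b"));     // "ai" -- one digit deeper when neighbours touch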
You are right, the rank fields get recalculated now and then; Lexorank calls this "balancing". Basically, balancing takes place on one of three occasions: ① the ranks are exhausted (largest value reached), ② the ranks have become too close together due to user re-ranking of tickets ([a, b, i] and something is supposed to go between a and b), and ③ a balancing is triggered manually on the management page. (Actually, according to the presentation, Lexorank allows for ranks of up to three letters, so "too close together" can be something like aaa and aab, but the idea is the same.)
The <bucket> part of the rank is increased during balancing, so a messy [0|a, 0|an, 0|b] can become a nice and clean [1|a, 1|i, 1|q] again. The brownbag presentation about Lexorank (as linked by #dandoen in the comments) mentions a round-robin use of <bucket>s: instead of incrementing without bound (0→1→2→3→…), the bucket number is incremented modulo 3, so it turns back to 0 after 2 (0→1→2→0→…). When comparing the ranks, the sorting algorithm can then consider a 0 "greater" than a 2 (it will not be purely lexicographical then, admitted). If the balancing algorithm now works backwards (reordering the last ticket first), this keeps the sorting order intact the whole time. (This is just a side aspect, which is why I keep the explanation short, but if it is interesting, ask, and I will elaborate on it.)
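If that side aspect is of interest, a bucket-aware comparison might look roughly like this (my reading of the presentation, not confirmed Jira code; the oldBucket parameter is my own invention):

// Compare two rank fields "<bucket>|<rank>" where buckets cycle 0 -> 1 -> 2 -> 0.
// Buckets are ordered by their distance from the bucket being migrated away from,
// so during a 2 -> 0 migration, bucket 2 sorts before bucket 0 ("0 greater than 2").
function compareRanks(a: string, b: string, oldBucket: number): number {
  const [bucketA, restA] = a.split("|");
  const [bucketB, restB] = b.split("|");
  const keyA = (Number(bucketA) - oldBucket + 3) % 3;
  const keyB = (Number(bucketB) - oldBucket + 3) % 3;
  if (keyA !== keyB) return keyA - keyB;
  return restA < restB ? -1 : restA > restB ? 1 : 0;
}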
Sidenote: Lexorank also keeps track of minimum and maximum values of the ranks. For the functioning of the algorithm itself, this is not necessary.
I would like a simple way to represent the order of a list of objects. When an object changes position in that list I would like to update just one record. I don't know if this can be done but I'm interested to ask the SO hive...
Wish-list constraints
the algorithm (or data structure) should allow for items to be repositioned in the list by updating the properties of a single item
the algorithm (or data structure) should require no housekeeping to maintain the integrity of the list
the algorithm (or data structure) should allow for the insertion of new items or the removal of existing items
Why I care about only updating one item at a time...
[UPDATED to clarify question]
The use-case for this algorithm is a web application with a CRUDy, resourceful server setup and a clean (Angular) client.
It's good practice to keep to the pure CRUD actions where possible, and it makes for cleaner code all round. If I can do this operation in a single resource#update request then I don't need any additional server-side code to handle the re-ordering; it can all be done using CRUD with no alterations.
If more than one item in the list needs to be updated for each move then I need a new action on my controller to handle it. It's not a showstopper but it starts spilling over into Angular and everything becomes less clean than it ideally should be.
Example
Let's say we have a magazine and the magazine has a number of pages :
Original magazine
- double page advert for Ford (page=1)
- article about Jeremy Clarkson (page=2)
- double page advert for Audi (page=3)
- article by James May (page=4)
- article by Richard Hammond (page=5)
- advert for Volkswagen (page=6)
Option 1: Store integer page numbers
... in which we update up to N records per move
If I want to pull Richard Hammond's page up from page 5 to page 2, I can do so by altering its page number. However, I also have to alter all the pages which it then displaces:
Updated magazine
- double page advert for Ford (page=1)
- article by Richard Hammond (page=2)(old_value=5)*
- article about Jeremy Clarkson (page=3)(old_value=2)*
- double page advert for Audi (page=4)(old_value=3)*
- article by James May (page=5)(old_value=4)*
- advert for Volkswagen (page=6)
* properties updated
However I don't want to update lots of records
- it doesn't fit my architecture
Let's say this is being done using javascript drag-n-drop re-ordering via Angular.js. I would ideally like to just update a value on the page which has been moved and leave the other pages alone. I want to send an http request to the CRUD resource for Richard Hammond's page saying that it's now been moved to the second page.
- and it doesn't scale
It's not a problem for me yet but at some point I may have 10,000 pages. I'd rather not update 9,999 of them when I move a new page to the front page.
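To make the cost concrete, here is a minimal sketch of that shift (the updatePage stub stands in for whatever data layer you use; everything here is illustrative):

type Page = { id: string; page: number };

// Hypothetical data-layer call, stubbed so the sketch runs.
function updatePage(p: Page, page: number): void {
  p.page = page;
  console.log(`UPDATE pages SET page=${page} WHERE id='${p.id}'`);
}

// Moving a page up (to < from) displaces every page in between: up to N updates.
function moveUp(pages: Page[], id: string, to: number): void {
  const from = pages.find(p => p.id === id)!.page;
  for (const p of pages) {
    if (p.id === id) updatePage(p, to);
    else if (p.page >= to && p.page < from) updatePage(p, p.page + 1);
  }
}

const mag: Page[] = [
  { id: "ford", page: 1 }, { id: "clarkson", page: 2 },
  { id: "audi", page: 3 }, { id: "captain_slow", page: 4 },
  { id: "hamster", page: 5 }, { id: "vw", page: 6 },
];
moveUp(mag, "hamster", 2); // four UPDATEs: hamster plus the three displaced pages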
Option 2: a linked list
... in which we update 3 records per move
If, instead of storing the page's position, I store the page that comes before it, then I reduce the number of actions from a maximum of N to 3.
Original magazine
- double page advert for Ford (id = ford, page_before = nil)
- article about Jeremy Clarkson (id = clarkson, page_before = ford)
- article by James May (id = captain_slow, page_before = clarkson)
- double page advert for Audi (id = audi, page_before = captain_slow)
- article by Richard Hammond (id = hamster, page_before = audi)
- advert for Volkswagen (id = vw, page_before = hamster)
again we move the cheeky hamster up...
Updated magazine
- double page advert for Ford (id = ford, page_before = nil)
- article by Richard Hammond (id = hamster, page_before = ford)*
- article about Jeremy Clarkson (id = clarkson, page_before = hamster)*
- article by James May (id = captain_slow, page_before = clarkson)
- double page advert for Audi (id = audi, page_before = captain_slow)
- advert for Volkswagen (id = vw, page_before = audi)*
* properties updated
This requires updating three rows in the database: the page we moved, the page just below its old position and the page just below its new position.
It's better but it still involves updating three records and doesn't give me the resourceful CRUD behaviour I'm looking for.
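For reference, those three writes can be sketched like this (in-memory stand-ins for the row updates; the helper logic is my own):

type Page = { id: string; page_before: string | null };

// Move a page so that it sits directly after newBefore (null = front).
function movePage(pages: Page[], id: string, newBefore: string | null): void {
  const moved = pages.find(p => p.id === id)!;
  const oldSuccessor = pages.find(p => p.page_before === id);        // below old position
  const newSuccessor = pages.find(p => p.page_before === newBefore); // below new position

  if (oldSuccessor) oldSuccessor.page_before = moved.page_before; // write 1
  if (newSuccessor) newSuccessor.page_before = id;                // write 2
  moved.page_before = newBefore;                                  // write 3
}

// movePage(magazine, "hamster", "ford") reproduces the update above:
// vw -> audi, clarkson -> hamster, hamster -> ford.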
Option 3: Non-integer positioning
...in which we update only 1 record per move (but need to housekeep)
Remember though, I still want to update only one record for each repositioning. In my quest to do this I take a different approach. Instead of storing the page position as an integer I store it as a float. This allows me to move an item by slipping it between two others:
Original magazine
- double page advert for Ford (page=1.0)
- article about Jeremy Clarkson (page=2.0)
- double page advert for Audi (page=3.0)
- article by James May (page=4.0)
- article by Richard Hammond (page=5.0)
- advert for Volkswagen (page=6.0)
and then we move Hamster again:
Updated magazine
- double page advert for Ford (page=1.0)
- article by Richard Hammond (page=1.5)*
- article about Jeremy Clarkson (page=2.0)
- double page advert for Audi (page=3.0)
- article by James May (page=4.0)
- advert for Volkswagen (page=6.0)
* properties updated
Each time we move an item, we choose a value somewhere between the items above and below it (say by taking the average of the two items we're slipping between).
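That choice is essentially one line of code (a sketch; null marks the ends of the list):

// One write per move: pick the midpoint of the neighbours' positions.
function positionBetween(above: number | null, below: number | null): number {
  if (above === null && below === null) return 1.0; // empty list
  if (above === null) return below! - 1.0;          // moved to the top
  if (below === null) return above + 1.0;           // moved to the bottom
  return (above + below) / 2;                       // slipped in between
}

console.log(positionBetween(1.0, 2.0)); // 1.5 -- Hammond's new position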
Eventually though you need to reset...
Whatever algorithm you use for slipping pages between each other will eventually run out of decimal places, since you have to keep using smaller and smaller gaps. As you move items more and more times, you gradually exhaust the precision of the float and eventually need a new position which is smaller than anything available.
Every now and then you therefore have to do a reset to re-index the list and bring it all back within range. This is ok but I'm interested to see whether there is a way to encode the ordering which doesn't require this housekeeping.
Is there an algorithm which requires only 1 update and no housekeeping?
Does an algorithm (or perhaps more accurately, a data encoding) exist for this problem which requires only one update and no housekeeping? If so, can you explain in plain English how it works (e.g. no reference to directed graphs or vertices...)? Many thanks.
UPDATE (post points-awarding)
I've awarded the bounty on this to the question I feel had the most interesting answer. Nobody was able to offer a solution (since from the looks of things there isn't one) so I've not marked any particular question as correct.
Adjusting the no-housekeeping criterion
After having spent even more time thinking about this problem, it occurs to me that the housekeeping criterion should actually be adjusted. The real issue with housekeeping is not that it's a hassle to do, but that it ideally needs to be robust against a client holding an outstanding copy of a pre-housekept set.
Let's say that Joe loads up a page containing a list (using Angular) and then goes off to make a cup of tea. Just after he downloads it, the housekeeping happens and re-indexes all items (1000, 2000, 3000, etc.). After he comes back from his cup of tea, he moves an item from 1010 to 1011. There is a risk at this point that the re-indexing will place his item into a position it wasn't intended to go.
As a note for the future - any housekeeping algorithm should ideally be robust to items submitted across different housekept versions of the list too. Alternatively you should version the housekeeping and create an error if someone tries to update across versions.
Issues with the linked list
While the linked list requires only a few updates it's got some drawbacks too:
it's not trivial to deal with deletions from the list (and you may have to adjust your #destroy method accordingly)
it's not easy to order the list for retrieval
The method I would choose
Having seen all the discussion, I think I would choose the non-integer (or string) positioning:
it's robust to inserts and deletions
it works with a single update
It does however need housekeeping and as mentioned above, if you're going to be complete you will also need to version each housekeeping and raise an error if someone tries to update based on a previous list version.
You should add one more sensible constraint to your wish-list:
max O(log N) space for each item (N being total number of items)
For example, the linked-list solution holds to this - you need at least N possible values for the pointer, so the pointer takes up log N space. If you don't have this limit, the trivial solution (growing strings) already mentioned by Lasse Karlsen and tmyklebu solves your problem, but the memory grows by one character (in the worst case) with each operation. You need some limit, and this is a sensible one.
Then, hear the answer:
No, there is no such algorithm.
Well, this is a strong statement, and not easy to hear, so I guess a proof is required :) I tried to figure out a general proof and posted a question on Computer Science Theory, but a general proof is really hard to do. So let's make it easier and explicitly assume there are two classes of solutions:
absolute addressing - address of each item is specified by some absolute reference (integer, float, string)
relative addressing - address of each item is specified relatively to other items (e.g. the linked list, tree, etc.)
To disprove the existence of an absolute addressing algorithm is easy. Just take 3 items, A, B, C, and keep moving the last one between the first two. You will soon run out of possible combinations for the address of the moved element and will need more bits. You will break the constraint of limited space.
Disproving the existence of relative addressing is also easy. For any non-trivial arrangement, there must be two different positions that other items refer to. If you then move some item between these two positions, at least two items have to be changed - the one which referred to the old position and the one which will refer to the new position. This violates the constraint of only one item changed.
Q.E.D.
Don't be fascinated by complexity - it doesn't work
Now that we (and you) can admit your desired solution does not exist, why would you complicate your life with complex solutions that do not work? They can't work, as we proved above. I think we got lost here. People here spent immense effort just to end up with overly complicated solutions that are even worse than the simplest solutions proposed:
Gene's rational numbers - they grow 4-6 bits in his example, instead of just 1 bit which is required by the most trivial algorithm (described below). 9/14 has 4 + 4 = 8 bits, 19/21 has 5 + 5 = 10 bits, and the resultant number 65/84 has 7 + 7 = 14 bits!! And if we just look at those numbers, we see that 10/14 or 2/3 are much better solutions. It can be easily proven that the growing string solution is unbeatable, see below.
mhelvens' solution - in the worst case he will add a new correcting item after each operation. That will certainly occupy far more than one extra bit.
These guys are very clever but obviously cannot bring something sensible. Someone has to tell them - STOP, there's no solution, and what you do simply can't be better than the most trivial solution you are afraid to offer :-)
Go back to square one, go simple
Now, go back to the list of your restrictions. One of them must be broken, you know that. Go through the list and ask, which one of these is least painful?
1) Violate memory constraint
This is hard to violate infinitely, because you have limited space... so be prepared to also violate the housekeeping constraint from time to time.
The solution to this is the one already proposed by tmyklebu and mentioned by Lasse Karlsen - growing strings. Just consider binary strings of 0 and 1. You have items A, B and C, and you move C between A and B. If there is no space between A and B, i.e. they look like
A xxx0
B xxx1
Then just add one more bit for C:
A xxx0
C xxx01
B xxx1
In the worst case, you need 1 extra bit after every operation. You can also work on bytes, not bits; then in the worst case, you will have to add one byte for every 8 operations. It's all the same. And it can easily be seen that this solution cannot be beaten: you must add at least one bit, and you cannot add less. In other words, no matter how complex a solution is, it can't be better than this.
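As a sketch (my own, under the assumption that every key ends in "1", which the function preserves if you seed the first item with "1"):

// Produce a bit string strictly between a and b in lexicographic order.
function between(a: string, b: string): string {
  if (b.startsWith(a)) {
    // b = a + suffix: stay on a's side, pad with zeros, close with a 1.
    return a + "0".repeat(b.length - a.length) + "1";
  }
  // a already branches below b somewhere, so appending "1" keeps the
  // result above a but still below b. Worst case: the key grows one bit.
  return a + "1";
}

console.log(between("0110", "0111")); // "01101" -- the A xxx0 / B xxx1 case above
console.log(between("01", "1"));      // "011"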
Pros:
you have one update per item
can compare any two elements, but slow
Cons:
comparing or sorting will get very very slow as the strings grow
there will be a housekeeping
2) Violate one item modified constraint
This leads to the original linked-list solution. Also, there are plenty of balanced tree data structures, which are even better if you need to look up or compare items (which you didn't mention).
These can go with 3 items modified; balanced trees sometimes need more (when balance operations are needed), but as it is amortized O(1), over a long run of operations the number of modifications per operation is constant. In your case, I would use the tree solution only if you need to look up or compare items. Otherwise, the linked-list solution rocks. Throwing it out just because it needs 3 operations instead of 1? C'mon :)
Pros:
optimal memory use
fast generation of ordered list (one linear pass), no need to sort
fast operations
no housekeeping
Cons:
cannot easily compare two items. You can easily generate the order of all the items, but given two items at random, comparing them takes O(N) for a list and O(log N) for balanced trees.
3 modified items instead of 1 (leaving it up to you how much of a "con" this is)
3) Violate "no housekeeping" constraint
These are the solutions with integers and floats, best described by Lasse Karlsen here. Also, the solutions from point 1) fall here :). The key question was already mentioned by Lasse:
How often will housekeeping have to take place?
If you use k-bit integers, then starting from the optimal state, where items are spread evenly in the integer space, housekeeping has to take place every k - log N operations in the worst case, because each insertion into the same gap halves it (e.g. with 32-bit keys and N = 10,000 items, that is after roughly 32 - 13 ≈ 19 moves into the same spot). You might then use more or less sophisticated algorithms to restrict the number of items you "housekeep".
Pros:
optimal memory use
fast operation
can compare any two elements
one item modified per operation
Cons:
housekeeping
Conclusion - hope never dies
I think the best way, and the answers here prove that, is to decide which one of those constraints is least pain and just take one of those simple solutions formerly frowned upon.
But, hope never dies. When writing this, I realized that your desired solution would exist if we were just able to ask the server!! It depends on the type of server of course, but a classical SQL server already has the trees/linked-lists implemented - for its indices. The server is already doing operations like "move this item before this one in the tree"!! But the server does this based on the data, not based on our request. If we were somehow able to ask the server to do this without the need to create perverse, endlessly growing data, that would be your desired solution! As I said, the server already does it - the solution is sooo close, but so far. If you can write your own server, you can do it :-)
#tmyklebu has the answer, but he never quite got to the punch line: The answer to your question is "no" unless you are willing to accept a worst case key length of n-1 bits to store n items.
This means that total key storage for n items is O(n^2).
There is an "adversary" information-theoretic argument that says no matter what scheme for assigning keys you choose for a database of n items, I can always come up with a series of n item re-positionings ("Move item k to position p.") that will force you to use a key with n-1 bits. Or by extension, if we start with an empty database, and you give me items to insert, I can choose a sequence of insertion positions that will require you to use at least zero bits for the first, one for the second, etc. indefinitely.
Edit
I earlier had an idea here about using rational numbers for keys. But it was more expensive than just adding one bit of length to split the gap between pairs of keys that differ by one. So I've removed it.
You can also interpret option 3 as storing positions as an unbounded-length string. That way you don't "run out of decimal places" or anything of that nature. Give the first item, say 'foo', position 1. Recursively partition your universe into "the stuff that's less than foo", which get a 0 prefix, and "the stuff that's bigger than foo", which get a 1 prefix.
This sucks in a lot of ways, notably that the position of an object can need as many bits to represent as you've done object moves.
I was fascinated by this question, so I started working on an idea. Unfortunately, it's complicated (you probably knew it would be) and I don't have time to work it all out. I just thought I'd share my progress.
It's based on a doubly-linked list, but with extra bookkeeping information in every moved item. With some clever tricks, I suspect that each of the n items in the set will require less than O(n) extra space, even in the worst case, but I have no proof of this. It will also take extra time to figure out the view order.
For example, take the following initial configuration:
A (-,B|0)
B (A,C|0)
C (B,D|0)
D (C,E|0)
E (D,-|0)
The top-to-bottom ordering is derived purely from the meta-data, which consists of a sequence of states (predecessor,successor|timestamp) for each item.
When moving D between A and B, you push a new state (A,B|1) to the front of its sequence with a fresh timestamp, which you get by incrementing a shared counter:
A (-,B|0)
D (A,B|1) (C,E|0)
B (A,C|0)
C (B,D|0)
E (D,-|0)
As you see, we keep the old information around in order to connect C to E.
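For concreteness, the per-item metadata might be modelled like this (my own sketch of the structure just described, not the Plunker code):

type ItemId = string;

interface LinkState {
  predecessor: ItemId | null;
  successor: ItemId | null;
  timestamp: number;
}

interface Item {
  id: ItemId;
  states: LinkState[]; // states[0] is the current claim, the rest is history
}

let clock = 0; // shared counter for fresh timestamps

// Moving an item pushes a new state; the old states stay around so that
// other items (like C and E above) can still be stitched together later.
function move(item: Item, predecessor: ItemId | null, successor: ItemId | null): void {
  item.states.unshift({ predecessor, successor, timestamp: ++clock });
}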
Here is roughly how you derive the proper order from the meta-data:
You keep a pointer to A.
A agrees it has no predecessor. So insert A. It leads you to B.
B agrees it wants to be successor to A. So insert B after A. It leads you to C.
C agrees it wants to be successor to B. So insert C after B. It leads you to D.
D disagrees. It wants to be successor to A. Start recursion to insert it and find the real successor:
D wins over B because it has a more recent timestamp. Insert D after A. It leads you to B.
B is already D's successor. Look back in D's history, which leads you to E.
E agrees it wants to be successor to D with timestamp 0. So return E.
So the successor is E. Insert E after C. It tells you it has no successor. You are finished.
This is not exactly an algorithm yet, because it doesn't cover all cases. For example, when you move an item forwards instead of backwards. When moving B between D and E:
A (-,B|0)
C (B,D|0)
D (C,E|0)
B (D,E|1)(A,C|0)
E (D,-|0)
The 'move' operation is the same. But the algorithm to derive the proper order is a bit different. From A it will run into B, able to get the real successor C from it, but with no place to insert B itself yet. You can keep it in reserve as a candidate for insertion after D, where it will eventually match timestamps against E for the privilege of that position.
I wrote some Angular.js code on Plunker that can be used as a starting-point to implement and test this algorithm. The relevant function is called findNext. It doesn't do anything clever yet.
There are optimizations to reduce the amount of metadata. For example, when moving an item away from where it was recently placed, and its neighbors are still linked of their own accord, you won't have to preserve its newest state but can just replace it. And there are probably situations where you can discard all of an item's sufficiently old states (when you move it).
It's a shame I don't have time to fully work this out. It's an interesting problem.
Good luck!
Edit: I felt I needed to clarify the above-mentioned optimization ideas. First, there is no need to push a new history configuration if the original links still hold. For example, it is fine to go from here (moved D between A and B):
A (-,B|0)
D (A,B|1) (C,E|0)
B (A,C|0)
C (B,D|0)
E (D,-|0)
to here (then moved D between B and C):
A (-,B|0)
B (A,C|0)
D (B,C|2) (C,E|0)
C (B,D|0)
E (D,-|0)
We are able to discard the (A,B|1) configuration because A and B were still connected by themselves. Any number of 'unrelated' movements can come in between without changing that.
Secondly, imagine that eventually C and E are moved away from each other, so the (C,E|0) configuration can be dropped the next time D is moved. This is trickier to prove, though.
All of this considered, I believe there is a good chance that the list requires less than O(n+k) space (n being the number of items in the list, k being the number of operations) in the worst case; especially in the average case.
The way to prove any of this is to come up with a simpler model for this data-structure, most likely based on graph theory. Again, I regret that I don't have time to work on this.
Your best option is "Option 3", although "non-integer" doesn't necessarily have to be involved.
"Non-integer" can mean anything that have some kind of accuracy definition, which means:
Integers (you just don't use 1, 2, 3, etc.)
Strings (you just tuck on more characters to ensure the proper "sort order")
Floating point values (adding more decimal places, somewhat the same as strings)
In each case you're going to have accuracy problems. For floating point types, there might be a hard limit in the database engine, but for strings, the limit will be the amount of space you allow for this. Please note that your question can be understood to mean "with no limits", meaning that for such a solution to work, you really need infinite accuracy/space for the keys.
However, I think that you don't need that.
Let's assume that you initially allocate every 1000th index to each row, meaning you will have:
1000 A
2000 B
3000 C
4000 D
... and so on
Then you move as follows:
D up between A and B (gets index 1500)
C up between A and D (gets index 1250)
B up between A and C (gets index 1125)
D up between A and B (gets index 1062)
C up between A and D (gets index 1031)
B up between A and C (gets index 1015)
D up between A and B (gets index 1007)
C up between A and D (gets index 1004)
B up between A and C (gets index 1002)
D up between A and B (gets index 1001)
At this point, the list looks like this:
1000 A
1001 D
1002 B
1004 C
Now, then you want to move C up between A and D.
This is currently not possible, so you're going to have to renumber some items.
You can get by with shifting D and B up by one (D→1002, B→1003) and slotting C into 1001, trying to update the minimum number of rows, and thus you get:
1000 A
1001 C
1002 D
1003 B
But now, if you want to move B up between A and C, you're going to renumber everything except A.
The question is this: How likely is it that you have this pathological sequence of events?
If the answer is very likely then you will have problems, regardless of what you do.
If the answer is likely seldom, then you might decide that the "problems" with the above approach are manageable. Note that renumbering and updating more than one row will likely be the exception here, and you would get something like "amortized 1 row updated per move". Amortized means that you spread the cost of those occasions where you have to update more than one row out over all the other occasions where you don't.
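A sketch of this amortized approach (my own code, not from the answer):

type Row = { id: string; pos: number };
const GAP = 1000;

// The common case: one update, using the midpoint of the neighbours.
// Returns false when the gap is exhausted and a renumber is needed.
function tryMove(row: Row, above: number, below: number): boolean {
  if (below - above < 2) return false;
  row.pos = Math.floor((above + below) / 2);
  return true;
}

// Housekeeping: re-spread all rows at multiples of GAP (N updates).
function renumber(rows: Row[]): void {
  rows.sort((a, b) => a.pos - b.pos)
      .forEach((r, i) => { r.pos = (i + 1) * GAP; });
}

// tryMove(d, 1000, 2000) sets d.pos to 1500; repeatedly bisecting the same
// gap exhausts it after about log2(GAP) ≈ 10 moves, and renumber() runs.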
What if you store the original order once, never change it after saving, and then store the number of increments the item has moved up or down the list?
Then moving something up 3 levels stores only that one action.
In the database you can then order by a computed column.
First time insert:
ord1 | ord2 | value
-----+------+--------
1 | 0 | A
2 | 0 | B
3 | 0 | C
4 | 0 | D
5 | 0 | E
6 | 0 | F
Update order, move D up 2 levels
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
2 | 0 | B | 2
3 | 0 | C | 3
4 | -2 | D | 2
5 | 0 | E | 5
6 | 0 | F | 6
Order by ord1 + ord2
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
2 | 0 | B | 2
4 | -2 | D | 2
3 | 0 | C | 3
5 | 0 | E | 5
6 | 0 | F | 6
Order by ord1 + ord2 ASC, ord2 ASC
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
4 | -2 | D | 2
2 | 0 | B | 2
3 | 0 | C | 3
5 | 0 | E | 5
6 | 0 | F | 6
Move E up 4 levels
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
5 | -4 | E | 1
1 | 0 | A | 1
4 | -2 | D | 2
2 | 0 | B | 2
3 | 0 | C | 3
6 | 0 | F | 6
Something like relative ordering, where ord1 is the absolute order while ord2 is the relative order.
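Expressed client-side, that two-key sort might look like this (a sketch, using the data from the tables above):

type Row = { ord1: number; ord2: number; value: string };

// ORDER BY ord1 + ord2 ASC, ord2 ASC: moved rows (ord2 < 0) win ties
// against the rows they displaced.
function sortRows(rows: Row[]): Row[] {
  return [...rows].sort((a, b) =>
    (a.ord1 + a.ord2) - (b.ord1 + b.ord2) || a.ord2 - b.ord2);
}

const rows: Row[] = [
  { ord1: 1, ord2: 0, value: "A" }, { ord1: 2, ord2: 0, value: "B" },
  { ord1: 3, ord2: 0, value: "C" }, { ord1: 4, ord2: -2, value: "D" },
  { ord1: 5, ord2: 0, value: "E" }, { ord1: 6, ord2: 0, value: "F" },
];
console.log(sortRows(rows).map(r => r.value).join("")); // "ADBCEF"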
Along with the same idea of just storing the history of movements and sorting based on that.
Not tested, not tried, just wrote down what I thought at this moment, maybe it can point you in some direction :)
I am unsure if you will call this cheating, but why not create a separate page list resource that references the page resources?
If you change the order of the pages you need not update any of the pages, just the list that stores the order of the IDs.
Original page list
[ford, clarkson, captain_slow, audi, hamster, vw]
Update to
[ford, hamster, clarkson, captain_slow, audi, vw]
Leave the page resources untouched.
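A minimal sketch of that list resource (the endpoint and payload shape are made up):

// One PUT updates the whole ordering; the page resources stay untouched.
async function reorderPages(magazineId: string, orderedIds: string[]): Promise<void> {
  await fetch(`/magazines/${magazineId}/page_list`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ pages: orderedIds }),
  });
}

// reorderPages("top-gear", ["ford", "hamster", "clarkson", "captain_slow", "audi", "vw"]);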
You could always store the ordering permutation separately as a log2(num_records!)-bit bitstring and figure out how to transform/CRUD that yourself, so that you'd only need to update a single bit for simple operations, if updating 2 or 3 records is not good enough for you.
What about the following very simple algorithm:
(let's take the analogy with page numbers in a book)
If you move a page to become the "new" page 3, you now have "at least" one page 3, possibly two, or even more. So, which one is the "right" page 3?
Solution: the "newest". So, we make use of the fact that a record also has an "updated date/time", to determine who the real page 3 is.
If you need to represent the entire list in its right order, you have to sort with two keys, one for the page number, and one for the "updated date/time" field.
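In code, that two-key sort is one comparator (a sketch; the field names are made up):

type Page = { title: string; page: number; updatedAt: number };

// Sort by page number; the most recently updated record wins a tie,
// so the newest "page 3" is the real page 3.
function inOrder(pages: Page[]): Page[] {
  return [...pages].sort((a, b) => a.page - b.page || b.updatedAt - a.updatedAt);
}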
I'm developing a reservation module for buses and I have trouble designing the right database structure for it.
Let's take following case:
Buses go from A to D with stopovers at B and C. A passenger can reserve a ticket for any route, i.e. from A to B, C to D, A to D, etc.
So each route can have many "subroutes", and the bigger ones contain the smaller ones.
I want to design a table structure for routes and stops in a way that would make it easy to search for free seats. So if someone reserves a seat from A to B, then seats from B to C or D would still be available.
All ideas would be appreciated.
I'd probably go with a "brute force" structure similar to this basic idea:
(There are many more fields that should exist in the real model. This is only a simplified version containing the bare essentials necessary to establish relationships between tables.)
The ticket "covers" stops through TICKET_STOP table, For example, if a ticket covers 3 stops, then TICKET_STOP will contain 3 rows related to that ticket. If there are 2 other stops not covered by that ticket, then there will be no related rows there, but there is nothing preventing a different ticket from covering these stops.
Liberal usage of natural keys / identifying relationships ensures two tickets cannot cover the same seat/stop combination. Look at how LINE.LINE_ID "migrates" along both edges of the diamond-shaped dependency, only to be merged at its bottom, in the TICKET_STOP table.
This model, by itself, won't protect you from anomalies such as a single ticket "skipping" some stops - you'll have to enforce some rules through the application logic. But, it should allow for a fairly simple and fast determination of which seats are free for which parts of the trip, something like this:
SELECT *
FROM
STOP CROSS JOIN SEAT
WHERE
STOP.LINE_ID = :line_id
AND SEAT.BUS_NO = :bus_no
AND NOT EXISTS (
SELECT *
FROM TICKET_STOP
WHERE
TICKET_STOP.LINE_ID = :line_id
AND TICKET_STOP.BUS_NO = :bus_no
AND TICKET_STOP.TRIP_NO = :trip_no
AND TICKET_STOP.SEAT_NO = SEAT.SEAT_NO
AND TICKET_STOP.STOP_NO = STOP.STOP_NO
)
(Replace the parameter prefix : with what is appropriate for your DBMS.)
This query essentially generates all combinations of stops and seats for the given line and bus, then discards those that are already "covered" by some ticket on the given trip. Those combinations that remain "uncovered" are free for that trip.
You can easily add: STOP.STOP_NO IN ( ... ) or SEAT.SEAT_NO IN ( ... ) to the WHERE clause to restrict the search on specific stops or seats.
From the perspective of bus company:
Usually one route is considered as a series of sections, like A to B, B to C, C to D, etc. The fill is calculated for each of those sections separately. So if the bus leaves A full and people get off at C, then a user can buy a ticket from C onwards.
We calculate it this way: each route has an ID, and each section belongs to that route ID. If a user buys a ticket covering more than one section, each of those sections is marked. For the next passenger, the system then checks whether all sections along the way are available.
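A sketch of that per-section check (in memory; a real system would do this in SQL):

// seats[seatNo][sectionNo] === true means that section is already sold.
type Trip = { seats: boolean[][] };

function isFree(trip: Trip, seatNo: number, fromSection: number, toSection: number): boolean {
  for (let s = fromSection; s < toSection; s++) {
    if (trip.seats[seatNo][s]) return false;
  }
  return true;
}

// Mark every section along the way; booking A->B leaves B->C and C->D untouched.
function book(trip: Trip, seatNo: number, fromSection: number, toSection: number): boolean {
  if (!isFree(trip, seatNo, fromSection, toSection)) return false;
  for (let s = fromSection; s < toSection; s++) trip.seats[seatNo][s] = true;
  return true;
}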
I'm looking for the best way to represent a set of structural data.
I'm designing a product picker. It will ask the user some questions to narrow down to the set of products.
i.e.
1st question: "What's the product Group?"
Answer: Group1
In Group1, available Product Categories are (pick one):
Category1
Category2
Category4
Answer: Category4
In Category4 for Group1, available Types are:
Type3
Type5
Answer: Type5
For Type5, in Category4, in Group1, the available Product Characteristics are... etc.
So each new question shows a list based not only on the previous answer, but on all the answers before (i.e. some Types available in Category4 would be different if that Category4 were in Group2). It's like a tree, except each child can appear under multiple parents.
There may be up to 10 such levels.
What's the most efficient structure to store this hierarchy?
Without any extra knowledge of the problem and the different distributions, here is what you should do:
Each node will have an n-dimensional array of bits stored in it, where n is its level (Groups are level 0). Then, when you reach level i, you will look over all nodes in that level, and for each one see if the bit that fits the current history is on or off. (There are no pointers or such between the nodes, nodes are just a convenient name I'm using).
The dimensions of the arrays in each level are the total sizes of the previous levels; e.g. in the Types level (level 2), you would have 2-dimensional arrays with the dimensions (# Groups)*(# Categories).
Example:
To know whether or not Type5 should appear in Category4, Group1, you would go to its array in the cell [1][4], and if it is on (1) then it should appear, otherwise (0) it shouldn't.
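A sketch of that lookup (illustrative data only; the sizes and names are made up):

const NUM_GROUPS = 3, NUM_CATEGORIES = 6;

// type5Visible[g][c] === 1 means Type5 appears under Group g, Category c.
const type5Visible: number[][] = Array.from({ length: NUM_GROUPS }, () =>
  new Array<number>(NUM_CATEGORIES).fill(0));
type5Visible[1][4] = 1; // Type5 shows up under Group1 / Category4

function shouldShowType5(group: number, category: number): boolean {
  return type5Visible[group][category] === 1;
}

console.log(shouldShowType5(1, 4)); // true
console.log(shouldShowType5(2, 4)); // false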
If you are using a language that allows pointer arithmetic (like C/C++), you can slightly optimize the matrix access by maintaining the offset you need to go to, since it always starts the same: [1], [1][4], [1][4][5], ... But this should come at a much later time, when everything already works properly.
If later on you get to know more details about your problem, such as that most of these connections do or don't exist, then you could think about using sparse matrices, for example, instead of regular ones.