Need help deciding on a game strategy - algorithm

I'd like to know what strategy should be used to solve the following problem.
Problem Statement
There are 2 coal mines, each employing a group of miners. Our job is to send food shipments to the mines. Every time a shipment of food arrives at their mine, the miners produce some amount of coal. There are three types of food shipments: meat, fish and bread.
Every time a new shipment arrives at their mine, they will consider the new shipment and the previous two shipments (or fewer, if there haven't been that many) and then:
If all shipments are of the same type, they will produce one unit of coal.
If there are two different types of food among the shipments, they will produce two units of coal.
If there are three different types of food, they will produce three units of coal.
The types of the food shipments and the order in which they will be sent are known beforehand.
Input
You are given the types of food shipments, in the order in which they are to be sent.
Goal
The goal is to maximize the coal output. This is done by determining which shipment should go to which mine. The 2 mines don't necessarily have to receive the same number of shipments (in fact, it is permitted to send all shipments to one mine).
Example
For the shipment order: MBMFFB, the expected output (maximum possible coal output) is 12.

The logic you use is wrong:
M -> Mine 1 = 1 coal unit(s)
B -> Mine 1 = 2 "
M -> Mine 2 = 1 "
F -> Mine 1 = 3 "
F -> Mine 2 = 2 "
B -> Mine 2 = 3 "
For its first delivery, Mine 1 had seen only one type of food, hence the single unit of coal.
I can see a simple dynamic programming algorithm, but I'll leave that to you.
A simple hint: for each shipment, you can send it to either mine 1 or mine 2; after sending it, all that matters is:
the amount of coal that has been mined so far;
the previous shipments sent to each mine.
So there are at most (3 ^ 3) ^ 2 = 729 shipment configurations, and for each of them an optimal amount of coal. At each step compute these configurations, and at the end you will have the answer.
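For illustration, here is a minimal sketch of that DP in Python (the state encoding and the names are mine, not from the question). It only keeps the last two shipments sent to each mine, since the coal rule never looks further back than the new shipment plus the previous two:

def max_coal(shipments):
    def coal(history, food):
        # distinct food types among this delivery and the previous two at the same mine
        return len(set(history[-2:] + (food,)))
    best = {((), ()): 0}   # (mine 1 history, mine 2 history) -> best coal so far
    for food in shipments:
        nxt = {}
        for (h1, h2), total in best.items():
            options = [((h1 + (food,))[-2:], h2, coal(h1, food)),   # send to mine 1
                       (h1, (h2 + (food,))[-2:], coal(h2, food))]   # send to mine 2
            for new_h1, new_h2, gain in options:
                key = (new_h1, new_h2)
                nxt[key] = max(nxt.get(key, 0), total + gain)
        best = nxt
    return max(best.values())

print(max_coal("MBMFFB"))   # 12, matching the expected output above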

Related

How can a customer spend the least money to buy clothes in a sales promotion (algorithm)

There is a sales promotion in a clothing shop. Every item of clothing has a price and a free condition. The free condition means that if your order total reaches this price, this item becomes free. A customer wants to buy some clothes; how can he spend the least money? He can buy them using several separate orders.
input:
m // how many clothing categories the shop has
// the following m lines give each item and its price
...
// the following m lines give each item and its free condition
...
n // how many items the customer wants to buy
// the following n items are the clothes the customer wants to buy
eg:
input:
3
// these three lines mean: if your order total >= 300 you can get A for free,
// if >= 400 you can get B, etc.; the order total does not include the free item
A 300
B 400
C 500
// these three lines are A, B and C's prices
A 300
B 400
C 500
3
A B C
output:
700
A + B -> C
// every order can get only one free item
// A+B's order total is 700, so he can get C for free and save 500; this is the best method
// if your order is B+C and you get A for free, you save only 300, so that is not the best
input:
3
A 300
B 400
C 500
A 300
B 400
C 500
4
A A B C
output:
800
A -> A // A's order total is 300, so he can get the second A for free
C -> B // C's order total is 500, so he can get B for free
This is my question. I can't solve it on the online judge, so I have to ask it here.
I think the complexity of this problem depends on exactly how you are allowed to buy the goods you want. I think it is easiest if you can present a set of goods that you will pay for, and then take for free anything you want whose free threshold is less than or equal to the amount of money you have handed over.
In this case there is an algorithm, similar to the pseudo-polynomial knapsack algorithm, which computes, for each possible total price P, the best way of selecting a set of goods to buy that adds up to P. That is, the way of selecting a set of goods to buy that minimises the maximum free threshold of any good you want that is not in that set. You need a table that gives you the value of that threshold for each possible P.
Consider the goods one by one, whether you want them or not, and at each stage build a table of answers based on the table of answers at the previous stage. Take the previous table and consider what happens if you buy or don't buy the current good, adding it or not to the best answer for a given price in the previous table. If you buy it, you have a possible best answer for an increased price. If you don't buy it, you may increase the maximum free threshold associated with that best answer: this happens if you actually want the current good and its free threshold is greater than the current maximum free threshold among wanted goods not bought in the answer you are considering.
Once you have a final table of best answers for each P you can backtrack to find out what set of goods makes up that P. Then look to see if there is any good that you want, not in that set, whose free threshold is greater than P. If not, then this is a possible answer, and you clearly want the one associated with the smallest value of P.
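A minimal sketch of that table-building idea in Python, under the simplified model described in this answer (the Item fields and function names are illustrative, not from the question). Note that this relaxed model allows several free items per order, so on the first example it pays 500 for C and takes A and B for free, rather than the 700 given by the original one-gift-per-order rule:

from collections import namedtuple

# price = what the item costs; threshold = order total needed to get it for free;
# wanted = whether the customer wants this item at all
Item = namedtuple("Item", "name price threshold wanted")

def cheapest_total(items):
    # best[paid] = smallest possible "largest free threshold among wanted items
    # we did NOT pay for", over all ways of paying exactly `paid`
    best = {0: 0}
    for it in items:
        nxt = {}
        for paid, worst_free in best.items():
            # option 1: pay for this item
            p = paid + it.price
            nxt[p] = min(nxt.get(p, float("inf")), worst_free)
            # option 2: don't pay for it; if we want it, it must come for free
            w = max(worst_free, it.threshold) if it.wanted else worst_free
            nxt[paid] = min(nxt.get(paid, float("inf")), w)
        best = nxt
    # a total is feasible if every unpaid wanted item's threshold is covered by it
    return min(p for p, worst in best.items() if worst <= p)

items = [Item("A", 300, 300, True),
         Item("B", 400, 400, True),
         Item("C", 500, 500, True)]
print(cheapest_total(items))   # 500 under this relaxed model: pay for C, A and B come free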

Apriori Algorithm- frequent item set generation

I am using the Apriori algorithm to identify a customer's frequent item sets. Based on the identified frequent item sets, I want to suggest items to the customer when they add a new item to their shopping list. As the frequent item sets I got the following result:
[1],[3],[2],[5]
[2,3],[3,5],[1,3],[2,5]
[2,3,5]
My problem is: if I consider only the [2,3,5] set to make suggestions to the customer, am I wrong? I.e. if the customer adds item 3 to their shopping list, I would recommend item 2 and item 5. If the customer adds item 1 to the list, no suggestions would be made, since I am considering only the set [2,3,5] and item 1 is not in that set. I want to know whether my logic (considering only the set [2,3,5]) is enough to make suggestions for the user.
You should base the rule on how the frequency of the item set compares to the frequencies of its sub item sets. For example:
if the frequency of (2,3,5) is close to the frequency of (3,5), the rule will be (3,5) -> 2
if the frequency of (2,3,5) is close to the frequency of (3), the rule will be 3 -> (2,5)
if the frequency of (2,3) is close to the frequency of (2), the rule will be 2 -> 3
That means it is not only the largest frequent item set that can be used to make rules; its frequent sub item sets can be used too. And the rules will be more precise if you consider how the frequencies of the item sets compare to each other.
No. Deriving recommendation rules requires more effort.
Just because [2,3,5] is frequent does not mean 2 -> 3,5 is a good rule.
Consider the case that 2 is a very popular product, but 3,5 are just barely frequent. Consider a gas station. [gas, coffee, bagel] is probably a frequent itemset, but rather few customers who buy gas will also buy coffee and a bagel (low confidence).
You do want to consider rules such as 2,3 -> 5 because they may have higher confidence. I.e. if the customer buys gas and coffee, suggest a bagel.
Frequency is not sufficient for recommendations! Consider: 2 and 3 are bought in 80% of cases, and 2, 3, 5 together in 60% of cases. Naively, 6 out of 8 times the customer will also buy 5 - that's 75% correct! But this does not mean 5 is a good recommendation, because 5 on its own could be bought in 80% of all cases; so if the customer bought 2 and 3, they are actually 5% less likely to buy 5, and we have a negative correlation here. That's why you need to look at lift too, or other measures like it - there are many.
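A small sketch of turning itemset supports into confidence and lift, assuming you already have the support values from Apriori (the numbers and function names below are made up for illustration, not taken from the question):

support = {
    frozenset({2}): 0.80, frozenset({3}): 0.75, frozenset({5}): 0.80,
    frozenset({2, 3}): 0.60, frozenset({3, 5}): 0.55, frozenset({2, 5}): 0.65,
    frozenset({2, 3, 5}): 0.50,
}

def rule_stats(antecedent, consequent):
    a = frozenset(antecedent)
    both = a | frozenset(consequent)
    confidence = support[both] / support[a]              # P(consequent | antecedent)
    lift = confidence / support[frozenset(consequent)]   # > 1 helps, < 1 hurts the recommendation
    return confidence, lift

c, l = rule_stats({2, 3}, {5})
print(f"2,3 -> 5: confidence={c:.2f}, lift={l:.2f}")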

Sorting People into Groups based on Votes

I have a problem finding an algorithm for sorting a dataset of people. I'll try to explain it in as much detail as possible:
The story starts with a survey. A bunch of people, let's say 600, can choose between 20-25 projects. They make a #1 wish, #2 wish and #3 wish, where #1 is the project they most want to take part in and wish #3 is the "not-perfect-but-still-acceptable" choice.
These projects are limited in their number of participants. Every project can take around 30 people (based on the number of people and the number of projects).
The algorithm puts the people into the different projects and should find the best possible combination.
The problem is that you can't just put all the people whose #1 wish is X into that project and push everyone else who also wished for X into their #2 wish, because that would not be the happiest situation for everybody.
You can think of it like this: for everybody who gets their #1 wish you score 100 points, for everybody who gets their #2 wish 60 points, their #3 wish 30 points, and anyone who gets none of their wishes 0 points. You want to score as many points as possible.
I hope you get my problem. This is for a school project day.
Is there something out there that could help me? Do you have any idea? I would be thankful for every tip!
Kind regards
You can solve this optimally by formulating it as a min cost network flow problem.
Add a node for each person, and one for each project.
Set cost for a flow between a person and a project according to their preferences.
(As Networkx provides a min cost flow but not a max cost flow, I have set the costs to be negative.)
For example, using Networkx and Python:
import networkx as nx

G = nx.DiGraph()
prefs = {'Tom':   ['Project1', 'Project2', 'Project3'],
         'Dick':  ['Project2', 'Project1', 'Project3'],
         'Harry': ['Project1', 'Project3', 'Project2']}
capacities = {'Project1': 2, 'Project2': 10, 'Project3': 4}
num_persons = len(prefs)
G.add_node('dest', demand=num_persons)
for person, projectlist in prefs.items():
    G.add_node(person, demand=-1)
    for i, project in enumerate(projectlist):
        if i == 0:
            cost = -100   # happy to assign first choices
        elif i == 1:
            cost = -60    # slightly unhappy to assign second choices
        else:
            cost = -30    # very unhappy to assign third choices
        G.add_edge(person, project, capacity=1, weight=cost)  # edge taken if person does this project
for project, c in capacities.items():
    G.add_edge(project, 'dest', capacity=c, weight=0)
flowdict = nx.min_cost_flow(G)
for person in prefs:
    for project, flow in flowdict[person].items():
        if flow:
            print(person, 'joins', project)
In this code Tom's number 1 choice is Project1, followed by Project2, then Project3.
The capacities dictionary specifies the upper limit on how many people can join each project.
My algorithm would be something like this:
mainloop
    wishlevel = 1
    loop
        distribute people into projects according to their wishlevel wish
        loop through projects, counting population
            if population exceeds maximum
                distribute excess non-redistributed people into their wishlevel+1 projects that are under-populated
                tag distributed people as 'redistributed' to avoid moving them again
            endif
        endloop
        wishlevel = wishlevel + 1
    loop until wishlevel == 3
mainloop until no project exceeds max population
This should make several passes through the data set until everything is evened out. This algorithm may result in an endless loop if you restrict redistribution of already-redistributed people in the event that one project fills up with such people as the algorithm progresses, so you might try it without that restriction.

What is a good, CRUD-sympathetic algorithm for ordering list items?

I would like a simple way to represent the order of a list of objects. When an object changes position in that list I would like to update just one record. I don't know if this can be done but I'm interested to ask the SO hive...
Wish-list constraints
the algorithm (or data structure) should allow for items to be repositioned in the list by updating the properties of a single item
the algorithm (or data structure) should require no housekeeping to maintain the integrity of the list
the algorithm (or data structure) should allow for the insertion of new items or the removal of existing items
Why I care about only updating one item at a time...
[UPDATED to clarify question]
The use-case for this algorithm is a web application with a CRUDy, resourceful server setup and a clean (Angular) client.
It's good practice to keep to the pure CRUD actions where possible, and it makes for cleaner code all round. If I can do this operation in a single resource#update request then I don't need any additional server-side code to handle the re-ordering, and it can all be done using CRUD with no alterations.
If more than one item in the list needs to be updated for each move then I need a new action on my controller to handle it. It's not a showstopper but it starts spilling over into Angular and everything becomes less clean than it ideally should be.
Example
Let's say we have a magazine and the magazine has a number of pages :
Original magazine
- double page advert for Ford (page=1)
- article about Jeremy Clarkson (page=2)
- double page advert for Audi (page=3)
- article by James May (page=4)
- article by Richard Hammond (page=5)
- advert for Volkswagen (page=6)
Option 1: Store integer page numbers
... in which we update up to N records per move
If I want to pull Richard Hammond's page up from page 5 to page 2 I can do so by altering its page number. However I also have to alter all the pages which it then displaces:
Updated magazine
- double page advert for Ford (page=1)
- article by Richard Hammond (page=2)(old_value=5)*
- article about Jeremy Clarkson (page=3)(old_value=2)*
- double page advert for Audi (page=4)(old_value=3)*
- article by James May (page=5)(old_value=4)*
- advert for Volkswagen (page=6)
* properties updated
However I don't want to update lots of records
- it doesn't fit my architecture
Let's say this is being done using javascript drag-n-drop re-ordering via Angular.js. I would ideally like to just update a value on the page which has been moved and leave the other pages alone. I want to send an http request to the CRUD resource for Richard Hammond's page saying that it's now been moved to the second page.
- and it doesn't scale
It's not a problem for me yet but at some point I may have 10,000 pages. I'd rather not update 9,999 of them when I move a new page to the front page.
Option 2: a linked list
... in which we update 3 records per move
If, instead of storing the page's position, I store the id of the page that comes before it, then I reduce the number of updates per move from a maximum of N to 3.
Original magazine
- double page advert for Ford (id = ford, page_before = nil)
- article about Jeremy Clarkson (id = clarkson, page_before = ford)
- article by James May (id = captain_slow, page_before = clarkson)
- double page advert for Audi (id = audi, page_before = captain_slow)
- article by Richard Hammond (id = hamster, page_before = audi)
- advert for Volkswagen (id = vw, page_before = hamster)
again we move the cheeky hamster up...
Updated magazine
- double page advert for Ford (id = ford, page_before = nil)
- article by Richard Hammond (id = hamster, page_before = ford)*
- article about Jeremy Clarkson (id = clarkson, page_before = hamster)*
- article by James May (id = captain_slow, page_before = clarkson)
- double page advert for Audi (id = audi, page_before = captain_slow)
- advert for Volkswagen (id = vw, page_before = audi)*
* properties updated
This requires updating three rows in the database: the page we moved, the page just below its old position and the page just below its new position.
It's better but it still involves updating three records and doesn't give me the resourceful CRUD behaviour I'm looking for.
Option 3: Non-integer positioning
...in which we update only 1 record per move (but need to housekeep)
Remember though, I still want to update only one record for each repositioning. In my quest to do this I take a different approach. Instead of storing the page position as an integer I store it as a float. This allows me to move an item by slipping it between two others:
Original magazine
- double page advert for Ford (page=1.0)
- article about Jeremy Clarkson (page=2.0)
- double page advert for Audi (page=3.0)
- article by James May (page=4.0)
- article by Richard Hammond (page=5.0)
- advert for Volkswagen (page=6.0)
and then we move Hamster again:
Updated magazine
- double page advert for Ford (page=1.0)
- article by Richard Hammond (page=1.5)*
- article about Jeremy Clarkson (page=2.0)
- double page advert for Audi (page=3.0)
- article by James May (page=4.0)
- advert for Volkswagen (page=6.0)
* properties updated
Each time we move an item, we choose a value somewhere between the items above and below it (say, by taking the average of the two values we're slipping between).
Eventually though you need to reset...
Whatever algorithm you use for inserting the pages into each other will eventually run out of decimal places since you have to keep using smaller numbers. As you move items more and more times you gradually move down the floating point chain and eventually need a new position which is smaller than anything available.
Every now and then you therefore have to do a reset to re-index the list and bring it all back within range. This is ok but I'm interested to see whether there is a way to encode the ordering which doesn't require this housekeeping.
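To make the precision problem concrete, here is a tiny sketch (the function name is illustrative, not from the question) of the midpoint scheme, showing how quickly a 64-bit float runs out of room when items keep being squeezed into the same gap:

def position_between(above, below):
    return (above + below) / 2.0

below = 2.0
for moves in range(1, 60):
    below = position_between(1.0, below)   # keep inserting a page just after page 1.0
    if below == 1.0:                       # the float gap is exhausted: time to re-index
        print("re-index needed after", moves, "moves")
        break

With doubles, the gap next to 1.0 is exhausted after roughly fifty such moves, which is why the periodic re-index described above is unavoidable with this encoding.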
Is there an algorithm which requires only 1 update and no housekeeping?
Does an algorithm (or perhaps more accurately, a data encoding) exist for this problem which requires only one update and no housekeeping? If so, can you explain in plain English how it works (i.e. with no reference to directed graphs or vertices...)? Muchas gracias.
UPDATE (post points-awarding)
I've awarded the bounty on this to the question I feel had the most interesting answer. Nobody was able to offer a solution (since from the looks of things there isn't one) so I've not marked any particular question as correct.
Adjusting the no-housekeeping criterion
After having spent even more time thinking about this problem, it occurs to me that the housekeeping criterion should actually be adjusted. The real danger with housekeeping is not that it's a hassle to do but that it should ideally be robust to a client who has an outstanding copy of a pre-housekept set.
Let's say that Joe loads up a page containing a list (using Angular) and then goes off to make a cup of tea. Just after he downloads it, the housekeeping happens and re-indexes all items (1000, 2000, 3000, etc.). After he comes back from his cup of tea, he moves an item from position 1010 to 1011. There is a risk at this point that the re-indexing will place his item into a position it wasn't intended to go.
As a note for the future - any housekeeping algorithm should ideally be robust to items submitted across different housekept versions of the list too. Alternatively you should version the housekeeping and create an error if someone tries to update across versions.
Issues with the linked list
While the linked list requires only a few updates it's got some drawbacks too:
it's not trivial to deal with deletions from the list (and you may have to adjust your #destroy method accordingly)
it's not easy to order the list for retrieval
The method I would choose
Having seen all the discussion, I think I would choose the non-integer (or string) positioning:
it's robust to inserts and deletions
it works off a single update
It does, however, need housekeeping, and as mentioned above, if you're going to be complete you will also need to version each housekeeping pass and raise an error if someone tries to update based on a previous list version.
You should add one more sensible constraint to your wish-list:
max O(log N) space for each item (N being total number of items)
For example, the linked-list solution holds to this - you need at least N possible values for the pointer, so the pointer takes up log N space. If you don't have this limit, the trivial solution (growing strings) already mentioned by Lasse Karlsen and tmyklebu solves your problem, but the memory grows by one character (in the worst case) for each operation. You need some limit, and this is a sensible one.
Then, hear the answer:
No, there is no such algorithm.
Well, this is a strong statement, and not easy to hear, so I guess a proof is required :) I tried to figure out a general proof and posted a question on Computer Science Theory, but a general proof is really hard to do. Let's make it easier and explicitly assume there are two classes of solutions:
absolute addressing - address of each item is specified by some absolute reference (integer, float, string)
relative addressing - address of each item is specified relatively to other items (e.g. the linked list, tree, etc.)
Disproving the existence of an absolute addressing algorithm is easy. Just take 3 items, A, B, C, and keep moving the last one between the first two. You will soon run out of possible combinations for the address of the moved element and will need more bits. You will break the constraint of limited space.
Disproving the existence of relative addressing is also easy. For a non-trivial arrangement, there are certainly two different positions that some other items refer to. Then if you move some item between these two positions, at least two items have to be changed - the one which referred to the old position and the one which will refer to the new position. This violates the constraint of only one item changed.
Q.E.D.
Don't be fascinated by complexity - it doesn't work
Now that we (and you) can admit your desired solution does not exist, why would you complicate your life with complex solutions that do not work? They can't work, as we proved above. I think we got lost here. People here spent immense effort just to end up with overly complicated solutions that are even worse than the simplest solutions proposed:
Gene's rational numbers - they grow 4-6 bits in his example, instead of just 1 bit which is required by the most trivial algorithm (described below). 9/14 has 4 + 4 = 8 bits, 19/21 has 5 + 5 = 10 bits, and the resultant number 65/84 has 7 + 7 = 14 bits!! And if we just look at those numbers, we see that 10/14 or 2/3 are much better solutions. It can be easily proven that the growing string solution is unbeatable, see below.
mhelvens' solution - in the worst case he will add a new correcting item after each operation. This will certainly occupy much more than one extra bit.
These guys are very clever, but obviously that effort cannot produce something sensible here. Someone has to tell them - STOP, there's no solution, and what you do simply can't be better than the most trivial solution you are afraid to offer :-)
Go back to square one, go simple
Now, go back to the list of your restrictions. One of them must be broken, you know that. Go through the list and ask, which one of these is least painful?
1) Violate memory constraint
This is hard to violate infinitely, because you have limited space... so be prepared to also violate the housekeeping constraint from time to time.
The solution to this is the one already proposed by tmyklebu and mentioned by Lasse Karlsen - growing strings. Just consider binary strings of 0s and 1s. You have items A, B and C, and you are moving C between A and B. If there is no space between A and B, i.e. they look like
A xxx0
B xxx1
Then just add one more bit for C:
A xxx0
C xxx01
B xxx1
In the worst case, you need one extra bit after every operation. You can also work with bytes instead of bits; then, in the worst case, you will have to add one byte for every 8 operations. It's all the same. And it can easily be seen that this solution cannot be beaten: you must add at least one bit, and you cannot add less. In other words, no matter how complex the solution is, it can't be better than this.
Pros:
you have one update per item
can compare any two elements, but slow
Cons:
comparing or sorting will get very very slow as the strings grow
there will be a housekeeping
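For illustration, here is one possible way to realise those growing keys in Python: treat each key as a binary fraction, so a key strictly between any two neighbours always exists and grows by roughly one bit per squeeze into the same gap. The helper names are mine, not from the answer:

from fractions import Fraction

def bits_to_frac(bits):
    # interpret a bit string as a binary fraction in [0, 1)
    return sum(Fraction(int(b), 2 ** (i + 1)) for i, b in enumerate(bits))

def frac_to_bits(x):
    # shortest bit string for a dyadic rational x in (0, 1)
    bits = ""
    while x:
        x *= 2
        bits += "1" if x >= 1 else "0"
        if x >= 1:
            x -= 1
    return bits

def key_between(a, b):
    # a key that sorts strictly between keys a and b
    return frac_to_bits((bits_to_frac(a) + bits_to_frac(b)) / 2)

a, b = "01", "1"                             # two existing adjacent keys
print(key_between(a, b))                     # '011' - sorts between '01' and '1'
print(key_between("", a))                    # '001' - sorts before '01' ("" acts as the bottom end)
print(sorted(["001", "01", "011", "1"]))     # plain string sort gives the intended order

The keys compare correctly as ordinary strings, but, as the answer notes, they grow by about one bit per squeeze into the same gap, so comparisons slow down over time.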
2) Violate one item modified constraint
This leads to the original linked-list solution. Also, there are plenty of balanced tree data structures, which are even better if you need to look up or compare items (which you didn't mention).
These can go with 3 items modified; balanced trees sometimes need more (when balance operations are needed), but as it is amortized O(1), over a long run of operations the number of modifications per operation is constant. In your case, I would use a tree solution only if you need to look up or compare items. Otherwise, the linked-list solution rocks. Throwing it out just because it needs 3 updates instead of 1? C'mon :)
Pros:
optimal memory use
fast generation of ordered list (one linear pass), no need to sort
fast operations
no housekeeping
Cons:
cannot easily compare two items. You can easily generate the order of all the items, but given two items at random, comparing them will take O(N) for the list and O(log N) for balanced trees.
3 modified items instead of 1 (leaving it up to you how much of a "con" this is)
3) Violate "no housekeeping" constraint
These are the solutions with integers and floats, best described by Lasse Karlsen here. Also, the solutions from point 1) fall here :). The key question was already raised by Lasse:
How often will housekeeping have to take place?
If you use k-bit integers, then starting from the optimal state, when items are spread evenly in the integer space, housekeeping will have to take place after every k - log N operations in the worst case. You might then use more or less sophisticated algorithms to restrict the number of items you "housekeep".
Pros:
optimal memory use
fast operation
can compare any two elements
one item modified per operation
Cons:
housekeeping
Conclusion - hope never dies
I think the best way - and the answers here prove it - is to decide which one of those constraints is the least painful and just take one of the simple solutions formerly frowned upon.
But, hope never dies. While writing this, I realized that your desired solution would exist if we were just able to ask the server! It depends on the type of server, of course, but a classical SQL server already has trees/linked lists implemented - for its indices. The server is already doing operations like "move this item before this one in the tree"! But the server does this based on the data, not based on our request. If we were somehow able to ask the server to do this without the need to create perverse, endlessly growing data, that would be your desired solution! As I said, the server already does it - the solution is so close, and yet so far. If you can write your own server, you can do it :-)
#tmyklebu has the answer, but he never quite got to the punch line: The answer to your question is "no" unless you are willing to accept a worst case key length of n-1 bits to store n items.
This means that total key storage for n items is O(n^2).
There is an "adversary" information-theoretic argument that says no matter what scheme for assigning keys you choose for a database of n items, I can always come up with a series of n item re-positionings ("Move item k to position p.") that will force you to use a key with n-1 bits. Or by extension, if we start with an empty database, and you give me items to insert, I can choose a sequence of insertion positions that will require you to use at least zero bits for the first, one for the second, etc. indefinitely.
Edit
I earlier had an idea here about using rational numbers for keys. But it was more expensive than just adding one bit of length to split the gap between pairs of keys that differ by one. So I've removed it.
You can also interpret option 3 as storing positions as an unbounded-length string. That way you don't "run out of decimal places" or anything of that nature. Give the first item, say 'foo', position 1. Recursively partition your universe into "the stuff that's less than foo", which get a 0 prefix, and "the stuff that's bigger than foo", which get a 1 prefix.
This sucks in a lot of ways, notably that the position of an object can need as many bits to represent as you've done object moves.
I was fascinated by this question, so I started working on an idea. Unfortunately, it's complicated (you probably knew it would be) and I don't have time to work it all out. I just thought I'd share my progress.
It's based on a doubly-linked list, but with extra bookkeeping information in every moved item. With some clever tricks, I suspect that each of the n items in the set will require less than O(n) extra space, even in the worst case, but I have no proof of this. It will also take extra time to figure out the view order.
For example, take the following initial configuration:
A (-,B|0)
B (A,C|0)
C (B,D|0)
D (C,E|0)
E (D,-|0)
The top-to-bottom ordering is derived purely from the meta-data, which consists of a sequence of states (predecessor,successor|timestamp) for each item.
When moving D between A and B, you push a new state (A,B|1) to the front of its sequence with a fresh timestamp, which you get by incrementing a shared counter:
A (-,B|0)
D (A,B|1) (C,E|0)
B (A,C|0)
C (B,D|0)
E (D,-|0)
As you see, we keep the old information around in order to connect C to E.
Here is roughly how you derive the proper order from the meta-data:
You keep a pointer to A.
A agrees it has no predecessor. So insert A. It leads you to B.
B agrees it wants to be successor to A. So insert B after A. It leads you to C.
C agrees it wants to be successor to B. So insert C after B. It leads you to D.
D disagrees. It wants to be successor to A. Start recursion to insert it and find the real successor:
D wins over B because it has a more recent timestamp. Insert D after A. It leads you to B.
B is already D's successor. Look back in D's history, which leads you to E.
E agrees it wants to be successor to D with timestamp 0. So return E.
So the successor is E. Insert E after C. It tells you it has no successor. You are finished.
This is not exactly an algorithm yet, because it doesn't cover all cases. For example, when you move an item forwards instead of backwards. When moving B between D and E:
A (-,B|0)
C (B,D|0)
D (C,E|0)
B (D,E|1)(A,C|0)
E (D,-|0)
The 'move' operation is the same. But the algorithm to derive the proper order is a bit different. From A it will run into B, able to get the real successor C from it, but with no place to insert B itself yet. You can keep it in reserve as a candidate for insertion after D, where it will eventually match timestamps against E for the privilege of that position.
I wrote some Angular.js code on Plunker that can be used as a starting-point to implement and test this algorithm. The relevant function is called findNext. It doesn't do anything clever yet.
There are optimizations to reduce the amount of metadata. For example, when moving an item away from where it was recently placed, and its neighbors are still linked of their own accord, you won't have to preserve its newest state but can just replace it. And there are probably situations where you can discard all of an item's sufficiently old states (when you move it).
It's a shame I don't have time to fully work this out. It's an interesting problem.
Good luck!
Edit: I felt I needed to clarify the above-mentioned optimization ideas. First, there is no need to push a new history configuration if the original links still hold. For example, it is fine to go from here (moved D between A and B):
A (-,B|0)
D (A,B|1) (C,E|0)
B (A,C|0)
C (B,D|0)
E (D,-|0)
to here (then moved D between B and C):
A (-,B|0)
B (A,C|0)
D (B,C|2) (C,E|0)
C (B,D|0)
E (D,-|0)
We are able to discard the (A,B|1) configuration because A and B were still connected by themselves. Any number of 'unrelated' movements can come in between without changing that.
Secondly, imagine that eventually C and E are moved away from each other, so the (C,E|0) configuration can be dropped the next time D is moved. This is trickier to prove, though.
All of this considered, I believe there is a good chance that the list requires less than O(n+k) space (n being the number of items in the list, k being the number of operations) in the worst case; especially in the average case.
The way to prove any of this is to come up with a simpler model for this data-structure, most likely based on graph theory. Again, I regret that I don't have time to work on this.
Your best option is "Option 3", although "non-integer" doesn't necessarily have to be involved.
"Non-integer" can mean anything that has some kind of limited accuracy, which includes:
Integers (you just don't use 1, 2, 3, etc.)
Strings (you just tack on more characters to ensure the proper "sort order")
Floating point values (adding more decimal places, somewhat the same as strings)
In each case you're going to have accuracy problems. For floating point types, there might be a hard limit in the database engine, but for strings, the limit will be the amount of space you allow for this. Please note that your question can be understood to mean "with no limits", meaning that for such a solution to work, you really need infinite accuracy/space for the keys.
However, I think that you don't need that.
Let's assume that you initially allocate every 1000th index to each row, meaning you will have:
1000 A
2000 B
3000 C
4000 D
... and so on
Then you move as follows:
D up between A and B (gets index 1500)
C up between A and D (gets index 1250)
B up between A and C (gets index 1125)
D up between A and B (gets index 1062)
C up between A and D (gets index 1031)
B up between A and C (gets index 1015)
D up between A and B (gets index 1007)
C up between A and D (gets index 1004)
B up between A and C (gets index 1002)
D up between A and B (gets index 1001)
At this point, the list looks like this:
1000 A
1001 D
1002 B
1004 C
Now, then you want to move C up between A and D.
This is currently not possible, so you're going to have to renumber some items.
You can get by by updating B to have number 1003, trying to update the minimum number of rows, and thus you get:
1000 A
1001 C
1002 D
1003 B
but now, if you want to move B up between A and C, you're going to have to renumber everything except A.
The question is this: how likely is it that you will hit this pathological sequence of events?
If the answer is "very likely", then you will have problems regardless of what you do.
If the answer is "probably seldom", then you might decide that the "problems" with the above approach are manageable. Note that renumbering and reordering more than one row will likely be the exception here, and you would get something like "amortized 1 row updated per move". Amortized means that you spread the cost of the occasions where you have to update more than one row out over all the other occasions where you don't.
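A tiny sketch of the spaced-integer scheme described above (illustrative names only): keys start 1000 apart, a moved row takes the midpoint of its new neighbours' keys, and a collision signals that some local renumbering is due:

def move_between(above_key, below_key):
    mid = (above_key + below_key) // 2
    if mid == above_key or mid == below_key:   # gap exhausted: renumber nearby rows first
        return None
    return mid

print(move_between(1000, 2000))   # 1500, as in the first move above
print(move_between(1000, 1001))   # None: no room left, time to renumber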
What if you store the original order, don't change it after saving it once, and then store the number of increments up or down the list?
Then by moving something up 3 levels you would store only this action.
In the database you can then order by a computed column.
First time insert:
ord1 | ord2 | value
-----+------+--------
1 | 0 | A
2 | 0 | B
3 | 0 | C
4 | 0 | D
5 | 0 | E
6 | 0 | F
Update order, move D up 2 levels
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
2 | 0 | B | 2
3 | 0 | C | 3
4 | -2 | D | 2
5 | 0 | E | 5
6 | 0 | F | 6
Order by ord1 + ord2
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
2 | 0 | B | 2
4 | -2 | D | 2
3 | 0 | C | 3
5 | 0 | E | 5
6 | 0 | F | 6
Order by ord1 + ord2 ASC, ord2 ASC
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
1 | 0 | A | 1
4 | -2 | D | 2
2 | 0 | B | 2
3 | 0 | C | 3
5 | 0 | E | 5
6 | 0 | F | 6
Move E up 4 levels
ord1 | ord2 | value | ord1 + ord2
-----+------+-------+-------------
5 | -4 | E | 1
1 | 0 | A | 1
4 | -2 | D | 2
2 | 0 | B | 2
3 | 0 | C | 3
6 | 0 | F | 6
Something like relative ordering, where ord1 is the absolute order while ord2 is the relative order.
Along with the same idea of just storing the history of movements and sorting based on that.
Not tested, not tried, just wrote down what I thought at this moment, maybe it can point you in some direction :)
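For what it's worth, the ordering used in the tables above can be checked with a quick two-key sort in Python (rows copied from the final table; this only demonstrates the ordering rule, not the update scheme):

rows = [(1, 0, 'A'), (2, 0, 'B'), (3, 0, 'C'), (4, -2, 'D'), (5, -4, 'E'), (6, 0, 'F')]
# order by ord1 + ord2 first, then by ord2, matching "ORDER BY ord1 + ord2 ASC, ord2 ASC"
for ord1, ord2, value in sorted(rows, key=lambda r: (r[0] + r[1], r[1])):
    print(value)   # E A D B C F, as in the last table above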
I am unsure if you will call this cheating, but why not create a separate page list resource that references the page resources?
If you change the order of the pages you need not update any of the pages, just the list that stores the order of the IDs.
Original page list
[ford, clarkson, captain_slow, audi, hamster, vw]
Update to
[ford, hamster, clarkson, captain_slow, audi, vw]
Leave the page resources untouched.
You could always store the ordering permutation separately as a ln(num_records!)/ln(2) bit bitstring and figure out how to transform/CRUD that yourself so that you'd only need to update a single bit for simple operations, if updating 2/3 records is not good enough for you.
What about the following very simple algorithm:
(let's take the analogy with page numbers in a book)
If you move a page to become the "new" page 3, you now have "at least" one page 3, possibly two, or even more. So, which one is the "right" page 3?
Solution: the "newest". So we make use of the fact that a record also has an "updated date/time" field to determine which is the real page 3.
If you need to represent the entire list in its right order, you have to sort with two keys, one for the page number, and one for the "updated date/time" field.

Algorithm for multicriterial arrangement

Let me describe problem in a form of a small fiction story.
The story
In a Brave New World, new cities are built in a couple of days and only need to be populated. Moreover, there is no more long, boring hiring process, no interviews and no subjective decisions - every person takes several tests and the results are used to find the best employees.
When a new city is built, a number of companies open offices there and ask the Super Mind to find the best employees for them, given a way to calculate each person's score for their particular company. People, for their part, ask the Super Mind to find work for them. They give it a list of companies where they would like to work, together with corresponding priorities. The Super Mind is very humanistic, so its task is to find an arrangement in which people get into the best companies they want, even if some companies are left without employees at all.
Formal definition
Now let me define the task more formally.
E - number of employees seeking for a job.
C - number of companies.
S(e,c) - score of employee e for company c.
Pr(e,c) - priority of company c in a personal "wishlist" of employee e.
P(c) - # of positions available in company c.
Task: obtain list of (e, c) tuples given following conditions:
employees with higher S(e,c) should always go first (e.g. if there's only one position left in company c and there are 2 candidates for it, it should be guaranteed that the employee with the higher score gets the position).
employees should get into the company with the highest priority available to them.
My algorithm
The only algorithm I can think of that guarantees all conditions is as follows. First I create a list of all possible applications from employees to companies (A(e,c,s,p)), where s is the score of employee e for company c and p is the priority of company c in employee e's wishlist. Then I sort all applications by total score and run the following recursive procedure:
def arrange(As, Ps, not_approved, approved):
    # As           - list of applications left (sorted by score)
    # Ps           - map (company -> # of positions left)
    # not_approved - set of rejected applications
    # approved     - set of approved applications (holds the intermediate result)
    if empty(As):
        return approved
    a = head(As)
    As_rest = tail(As)
    if cant_be_hired(a):             # no places left in the company from this application
        return arrange(As_rest, Ps, not_approved + a, approved)
    elif highest_priority(a):        # this application has the highest priority still open for its employee
        return arrange(As_rest, decrement(Ps, company(a)), not_approved, approved + a)
    else:
        # the application can be accepted, but the employee still has
        # higher-priority companies open; check what happens if we reject it
        check_result = arrange(As_rest, Ps, not_approved + a, approved)
        if employee_is_hired_for_better_job(a, check_result):
            # the employee can be hired for a job with higher priority,
            # so check_result is already the answer
            return check_result
        else:
            # otherwise accept this application and proceed with the rest
            return arrange(As_rest, decrement(Ps, company(a)), not_approved, approved + a)
But, of course, this algorithm has a very large computational complexity. Dynamic programming with caching of the check results helps a bit, but it is still too slow.
I was thinking of some kind of constrained optimization algorithm that always converges, but I'm not familiar enough with this field to find an appropriate one.
So, is there better algorithm?
