Find k-th element in skiplist - explanation needed - algorithm

We're learning about skip lists at my university and we have to find the k-th element in a skip list. I haven't found anything about this on the internet, since the skip list is not really a popular data structure. W. Pugh in his original article wrote:
Each element x has an index pos(x). We use this value in our invariants but do not store it. The index of the header is zero, the index of the first element is one and so on. Associated with each forward pointer is a measurement, fDistance, of the distance traversed by that pointer:
x→fDistance[i] = pos(x→forward[i]) – pos(x).
Note that the distance traversed by a level 1 pointer is always 1, so some storage economy is possible here at the cost of a slight increase in the complexity of the algorithms.
SearchByPosition(list, k)
    if k < 1 or k > size(list) then return bad-index
    x := list→header
    pos := 0
    -- loop invariant: pos = pos(x)
    for i := list→level downto 1 do
        while pos + x→fDistance[i] ≤ k do
            pos := pos + x→fDistance[i]
            x := x→forward[i]
    return x→value
The problem is, I still don't get what is going on here. How do we know the positions of elements without storing them? How do we calculate fDistance from pos(x) if we don't store pos(x)? If we go from the highest level of the skip list, how do we know how many nodes on level 0 (or 1, whichever is the lowest) we skip this way?

I'm going to assume you're referring to how to find the k-th smallest (or largest) element in a skip list. This is a rather standard assumption I think, otherwise you have to clarify what you mean.
I'll refer to the GIF on wikipedia in this answer: https://en.wikipedia.org/wiki/Skip_list
Let's say you want to find the k = 5 smallest element.
You start from the highest level (4 in the figure). How many elements would you skip from 30 to NIL? 6 (we also count the 30). That's too much.
Go down a level. How many skipped from 30 to 50? 2: 30 and 40.
So we reduced the problem to finding the k = 5 - 2 = 3 smallest element starting at 50 on level 3.
How many skipped from 50 to NIL? 4, that's one too many.
Go down a level. How many skipped from 50 to 70? 2. Now find the 3 - 2 = 1 smallest element starting from 70 on level 2.
How many skipped from 70 to NIL? 2, one too many.
From 70 to 90 on level 1? 1 (itself). So the answer is 70.
So you need to store how many nodes are skipped for each node at each level and use that extra information in order to get an efficient solution. That seems to be what fDistance[i] does in your code.
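To make this concrete, here is a minimal runnable Python sketch of SearchByPosition. The node layout and all names (Node, dist, search_by_position, the hand-built demo list) are my own illustration, not code from Pugh's paper: each forward pointer stores how many positions it jumps, so summing those distances while descending reproduces pos(x) without ever storing positions in the nodes.

class Node:
    def __init__(self, value, level):
        self.value = value
        self.forward = [None] * level  # forward[i] = next node on level i
        self.dist = [1] * level        # dist[i] = pos(forward[i]) - pos(self)

def search_by_position(header, size, levels, k):
    """Return the value of the k-th element (1-based), or None if out of range."""
    if k < 1 or k > size:
        return None
    x, pos = header, 0                 # invariant: pos == pos(x)
    for i in range(levels - 1, -1, -1):
        while x.forward[i] is not None and pos + x.dist[i] <= k:
            pos += x.dist[i]
            x = x.forward[i]
    return x.value

# Hand-built 3-level list over [10, 20, 30, 40]:
# level 2: header ------------> 30
# level 1: header ----> 20 ---> 30
# level 0: header  10   20   30   40
header = Node(None, 3)
n10, n20, n30, n40 = Node(10, 1), Node(20, 2), Node(30, 3), Node(40, 1)
header.forward, header.dist = [n10, n20, n30], [1, 2, 3]
n10.forward = [n20]
n20.forward = [n30, n30]
n30.forward = [n40, None, None]
print(search_by_position(header, 4, 3, 3))  # -> 30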

Related

Replace two elements with their absolute difference and generate the minimum possible element in array

I have an array of size n and I can apply any number of operations (zero included) on it. In an operation, I can take any two elements and replace them with the absolute difference of the two elements. We have to find the minimum possible element that can be generated using the operation. (n < 1000)
Here's an example of how operation works. Let the array be [1,3,4]. Applying operation on 1,3 gives [2,4] as the new array.
Ex: 2 6 11 3 => ans = 0
This is because 11-6 = 5 and 5-3 = 2 and 2-2 = 0
Ex: 20 6 4 => ans = 2
Ex: 2 6 10 14 => ans = 0
Ex: 2 6 10 => ans = 2
Can anyone tell me how can I approach this problem?
Edit:
We can use recursion to generate all possible cases and pick the minimum element from them. This would have a complexity of roughly O((n^2)!).
Another approach I tried is sorting the array and then making a recursive call where, starting from either index 0 or 1, I apply the operation on consecutive pairs of elements. This continues till there is only one element left in the array, and we can return the minimum found at any point in the recursion. This has a complexity of O(n^2) but doesn't necessarily give the right answer.
Ex: 2 6 10 15 => (4 5) & (2 4 15) => (1) & (2 15) & (2 11) => (13) & (9). The minimum of this will be 1 which is the answer.
When you choose two elements for the operation, you subtract the smaller one from the bigger one. So if you choose 1 and 7, the result is 7 - 1 = 6.
Now having 2 6 and 8 you can do:
8 - 2 -> 6 and then 6 - 6 = 0
You may also write it like this: 8 - 2 - 6 = 0
Let"s consider different operation: you can take two elements and replace them by their sum or their difference.
Even though you can obtain completely different values using the new operation, the absolute value of the element closest to 0 will be exactly the same as using the old one.
First, let's try to solve this problem using the new operations, then we'll make sure that the answer is indeed the same as using the old ones.
What you are trying to do is to choose two non-intersecting subsets of the initial array, then subtract the sum of all the elements of the second set from the sum of all the elements of the first. You want to find two such subsets that the result is as close as possible to 0. That is an NP-hard problem in general, but one can solve it using a pseudopolynomial algorithm similar to the one for the knapsack problem, in O(n * sum of all elements) time.
Each element of the initial array can belong to the positive set (the set whose sum you subtract from), to the negative set (the set whose sum you subtract), or to neither of them. In other words: each element you can either add to the result, subtract from the result, or leave untouched. Let's say we have already calculated all obtainable values using elements from the first one to the i-th one. Now we consider the (i+1)-th element: we can take any of the obtainable values and increase it or decrease it by the value of the (i+1)-th element. After doing that with all the elements, we get all possible values obtainable from the array. Then we choose the one closest to 0.
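A small Python sketch of that DP (the function name and the set-based implementation are mine; replacing the set with a bitset over [0, sum of all elements] gives the stated O(n * sum) bound):

def min_element(arr):
    # reachable = all values |sum(P) - sum(N)| over disjoint subsets P, N
    # of the elements seen so far, using at least one element
    reachable = set()
    for x in arr:
        # each previous total can be increased by x, decreased by x, or left
        # alone; storing absolute values suffices since +t and -t are symmetric
        reachable |= ({abs(r + x) for r in reachable}
                      | {abs(r - x) for r in reachable}
                      | {x})
    return min(reachable)

print(min_element([2, 6, 11, 3]))  # -> 0
print(min_element([20, 6, 4]))     # -> 2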
Now the harder part, why is it always a correct answer?
Let's consider the positive and negative sets from which we obtain the minimal result. We want to achieve that result using the initial operations. Let's say that there are more elements in the negative set than in the positive set (otherwise swap them).
What if we have only one element in the positive set and only one element in the negative set? Then absolute value of their difference is equal to the value obtained by using our operation on it.
What if we have one element in the positive set and two in the negative one?
1) One of the negative elements is smaller than the positive element - then we just take them and use the operation on them. The result of it is a new element in the positive set. Then we have the previous case.
2) Both negative elements are smaller than the positive one. Then if we remove bigger element from the negative set we get the result closer to 0, so this case is impossible to happen.
Let's say we have n elements in the positive set and m elements in the negative set (n <= m), and we are able to obtain the absolute value of the difference of their sums (let's call it x) by using some operations. Now let's add an element to the negative set. If the difference before adding the new element was negative, decreasing it by any other number moves it farther from 0, so that is impossible. So the difference must have been positive. Then we can use our operation on x and the new element to get the result.
Now the second case: let's say we have n elements in the positive set and m elements in the negative set (n < m), and we are able to obtain the absolute value of the difference of their sums (again, let's call it x) by using some operations. Now we add a new element to the positive set. Similarly, the difference must have been negative, so x is in the negative set. Then we obtain the result by doing the operation on x and the new element.
Using induction we can prove that the answer is always correct.

Maximum Value taken by thief

Consider we have N sacks of gold and a thief who wants to get the maximum amount of gold. The thief can take the gold under two constraints:
1) He must take the gold from contiguous sacks.
2) He must take the same amount of gold from every chosen sack.
N sacks, 1 <= N <= 1000
M, the quantity of gold in a sack, 0 <= M <= 100
Sample Input1:
3 0 5 4 4 4
Output:
16
Explanation:
4 is the minimum amount he can take from the sacks 3 to 6 to get the maximum value of 16.
Sample Input2:
2 4 3 2 1
Output:
8
Explanation:
2 is the minimum amount he can take from the sacks 1 to 4 to get the maximum value of 8.
I approached the problem by subtracting values in the array and taking the transition point from negative to positive, but this doesn't solve the problem.
EDIT: code provided by OP to find the index:
int temp[6];
for (i = 1; i < 6; i++) {
    for (j = i - 1; j >= 0; j--) {
        temp[j] = a[j] - a[i];
    }
}
for (i = 0; i < 6; i++) {
    if (temp[i] >= 0) {
        index = i;
        break;
    }
}
The best amount of gold (TBAG) taken from every sack is equal to the weight of some sack. Let's put indexes of candidates in a stack, in order.
When we meet a heavier weight (than the stack contains), it definitely continues a "good sequence", so we just add its index to the stack.
When we meet a lighter weight (than the stack top), it breaks some "good sequences", and we can remove heavier candidates from the stack - they will not have a chance to be TBAG later. Remove the stack top until a lighter weight is met, calculating the potentially stolen sum during this process.
Note that the stack always contains indexes of a strictly increasing sequence of weights, so we don't need to consider items before the index at the stack top (intermediate AG) in the calculation of the stolen sum (they will be considered later with another AG value).
for idx in Range(Sacks):
    while (not Stack.Empty) and (Sacks[Stack.Peek] >= Sacks[idx]):  // smaller sack is met
        AG = Sacks[Stack.Pop]
        if Stack.Empty then
            firstidx = 0
        else
            firstidx = Stack.Peek + 1
        // range_length * smallest_weight_in_range
        BestSUM = MaxValue(BestSUM, AG * (idx - firstidx))
    Stack.Push(idx)

now check the rest: repeat the while loop without the >= condition
Every item is pushed and popped once, so linear time and space complexity.
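For reference, a runnable Python version of the sketch above (essentially the classic largest-rectangle-in-a-histogram stack; the function name and the sentinel that flushes the stack at the end are my additions):

def max_gold(sacks):
    best = 0
    stack = []  # indexes of a strictly increasing sequence of weights
    for idx, weight in enumerate(sacks + [0]):   # sentinel 0 flushes the stack
        while stack and sacks[stack[-1]] >= weight:  # smaller sack is met
            amount = sacks[stack.pop()]              # candidate AG
            first = stack[-1] + 1 if stack else 0
            best = max(best, amount * (idx - first)) # range_length * AG
        stack.append(idx)
    return best

print(max_gold([3, 0, 5, 4, 4, 4]))  # -> 16
print(max_gold([2, 4, 3, 2, 1]))     # -> 8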
P.S. I feel that I've seen this problem before in another formulation...
I see two different approaches for the moment:
Naive approach: For each pair of indices (i,j) in the array, compute the minimum value m(i,j) of the array on the interval (i,j), then compute score(i,j) = |j-i+1|*m(i,j). Then take the maximum score over all pairs (i,j).
-> Complexity of O(n^3).
Less naive approach:
Compute the set of values of the array
For each value, compute the maximum score it can get. For that, you just have to iterate once over all the values of the array. For example, when your sample input is [3 0 5 4 4 4] and the current value you are looking at is 3, it will give you a score of 12. (You'll first find a score of 3 thanks to the first index, and then a score of 12 due to indices from 2 to 5.)
Take the maximum over all values found at step 2.
-> The complexity here is O(n*m), since step 2 is done at most m times and each pass is O(n).
Maybe there is a better complexity, but I don't have a clue yet.
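A minimal Python sketch of the less naive approach (names are mine): for each distinct value v, one pass finds the longest run of sacks holding at least v.

def max_gold_by_value(sacks):
    best = 0
    for v in set(sacks):
        run = 0
        for w in sacks:
            run = run + 1 if w >= v else 0  # length of current run of sacks >= v
            best = max(best, v * run)
    return best

print(max_gold_by_value([3, 0, 5, 4, 4, 4]))  # -> 16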

How to adapt Fenwick tree to answer range minimum queries

A Fenwick tree is a data structure that gives an efficient way to answer two main queries:
add a value at a particular index of an array: update(index, value)
find the sum of elements from 1 to N: find(n)
Both operations are done in O(log(n)) time, and I understand the logic and implementation. It is not hard to implement a bunch of other operations, like finding the sum from N to M.
I wanted to understand how to adapt a Fenwick tree for RMQ (range minimum queries). It is obvious how to change the Fenwick tree for the first two operations, but I am failing to figure out how to find the minimum on the range from N to M.
After searching for solutions, the majority of people think that this is not possible, while a small minority claims that it actually can be done (approach1, approach2).
The first approach (written in Russian; based on my Google translation it has zero explanation and only two functions) relies on three arrays (initial, left and right) and, upon my testing, was not working correctly for all possible test cases.
The second approach requires only one array and, based on its claims, runs in O(log^2(n)); it also has close to no explanation of why and how it should work. I have not tried to test it.
In light of controversial claims, I wanted to find out whether it is possible to augment Fenwick tree to answer update(index, value) and findMin(from, to).
If it is possible, I would be happy to hear how it works.
Yes, you can adapt Fenwick Trees (Binary Indexed Trees) to
Update value at a given index in O(log n)
Query minimum value for a range in O(log n) (amortized)
We need 2 Fenwick trees and an additional array holding the real values for nodes.
Suppose we have the following array:
index 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
value 1 0 2 1 1 3 0 4 2  5  2  2  3  1  0
We wave a magic wand and the following trees appear:
Note that in both trees each node represents the minimum value for all nodes within that subtree. For example, in BIT2 node 12 has value 0, which is the minimum value for nodes 12,13,14,15.
Queries
We can efficiently query the minimum value for any range by calculating the minimum of several subtree values and one additional real node value. For example, the minimum value for range [2,7] can be determined by taking the minimum value of BIT2_Node2 (representing nodes 2,3), BIT1_Node7 (representing node 7), BIT1_Node6 (representing nodes 5,6) and REAL_4 - therefore covering all nodes in [2,7]. But how do we know which subtrees we want to look at?
Query(int a, int b) {
    int val = infinity // always holds the known min value for our range
    // Start traversing the first tree, BIT1, from the beginning of range, a
    int i = a
    while (parentOf(i, BIT1) <= b) {
        val = min(val, BIT2[i]) // Note: traversing BIT1, yet looking up values in BIT2
        i = parentOf(i, BIT1)
    }
    // Start traversing the second tree, BIT2, from the end of range, b
    i = b
    while (parentOf(i, BIT2) >= a) {
        val = min(val, BIT1[i]) // Note: traversing BIT2, yet looking up values in BIT1
        i = parentOf(i, BIT2)
    }
    val = min(val, REAL[i]) // Explained below
    return val
}
It can be mathematically proven that both traversals will end in the same node. That node is a part of our range, yet it is not a part of any subtrees we have looked at. Imagine a case where the (unique) smallest value of our range is in that special node. If we didn't look it up our algorithm would give incorrect results. This is why we have to do that one lookup into the real values array.
To help understand the algorithm I suggest you simulate it with pen & paper, looking up data in the example trees above. For example, a query for range [4,14] would return the minimum of values BIT2_4 (rep. 4,5,6,7), BIT1_14 (rep. 13,14), BIT1_12 (rep. 9,10,11,12) and REAL_8, therefore covering all possible values [4,14].
Updates
Since a node represents the minimum value of itself and its children, changing a node will affect its parents, but not its children. Therefore, to update a tree we start from the node we are modifying and move up all the way to the fictional root node (0 or N+1 depending on which tree).
Suppose we are updating some node in some tree:
If new value < old value, we will always overwrite the value and move up
If new value == old value, we can stop since there will be no more changes cascading upwards
If new value > old value, things get interesting.
If the old value still exists somewhere within that subtree, we are done
If not, we have to find the new minimum value between real[node] and each tree[child_of_node], change tree[node] and move up
Pseudocode for updating node with value v in a tree:
while (node <= n+1) {
    if (v > tree[node]) {
        if (oldValue == tree[node]) {
            v = min(v, real[node])
            for-each child {
                v = min(v, tree[child])
            }
        } else break
    }
    if (v == tree[node]) break
    tree[node] = v
    node = parentOf(node, tree)
}
Note that oldValue is the original value we replaced, whereas v may be reassigned multiple times as we move up the tree.
Binary Indexing
In my experiments, Range Minimum Queries with this structure were about twice as fast as a Segment Tree implementation, and updates were marginally faster. The main reason for this is using super efficient bitwise operations for moving between nodes. They are very well explained here. Segment Trees are really simple to code, though, so think about whether the performance advantage is really worth it: the update method of my Fenwick RMQ is 40 lines and took a while to debug. If anyone wants my code, I can put it on GitHub. I also wrote a brute-force checker and test generators to make sure everything works.
I had help understanding this subject & implementing it from the Finnish algorithm community. Source of the image is http://ioinformatics.org/oi/pdf/v9_2015_39_44.pdf, but they credit Fenwick's 1994 paper for it.
The Fenwick tree structure works for addition because addition is invertible. It doesn't work for minimum, because as soon as you have a cell that's supposed to be the minimum of two or more inputs, you've lost information potentially.
If you're willing to double your storage requirements, you can support RMQ with a segment tree that is constructed implicitly, like a binary heap. For an RMQ with n values, store the n values at locations [n, 2n) of an array. Locations [1, n) are aggregates, with the formula A(k) = min(A(2k), A(2k+1)). Location 2n is an infinite sentinel. The update routine should look something like this.
def update(n, a, i, x):  # value[i] = x
    i += n
    a[i] = x
    # update the aggregates
    while i > 1:
        i //= 2
        a[i] = min(a[2*i], a[2*i+1])
The multiplies and divides here can be replaced by shifts for efficiency.
The RMQ pseudocode is more delicate. Here's another untested and unoptimized routine.
def rmq(n, a, i, j):  # min(value[i:j])
    i += n
    j += n
    x = inf
    while i < j:
        if i % 2 == 0:
            i //= 2
        else:
            x = min(x, a[i])
            i = i//2 + 1
        if j % 2 == 0:
            j //= 2
        else:
            x = min(x, a[j-1])
            j //= 2
    return x
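To try these routines out, here is a small build helper and usage example (the helper and all names are my additions; inf is the infinity the rmq routine refers to):

inf = float('inf')

def build(values):
    n = len(values)
    a = [inf] * (2 * n + 1)  # a[0] unused; a[2n] is the infinite sentinel
    a[n:2*n] = values
    for k in range(n - 1, 0, -1):  # aggregates: A(k) = min(A(2k), A(2k+1))
        a[k] = min(a[2*k], a[2*k + 1])
    return a

values = [1, 0, 2, 1, 1, 3, 0, 4]
n = len(values)
a = build(values)
print(rmq(n, a, 2, 7))  # min(values[2:7]) -> 0
update(n, a, 1, 5)      # value[1] = 5
print(rmq(n, a, 0, 4))  # min([1, 5, 2, 1]) -> 1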

Maximum gap in unsorted array

I'm following algorithm from here:
http://cgm.cs.mcgill.ca/~godfried/teaching/dm-reading-assignments/Maximum-Gap-Problem.pdf
I don't understand steps 2 and 3:
Divide the interval [x_min, x_max] into (n−1) "buckets" of equal size δ = (x_max − x_min)/(n−1).
For each of the remaining (n−2) numbers determine in which bucket it falls using the floor function. The number x_i belongs to the k-th bucket B_k if, and only if, floor((x_i − x_min)/δ) = k−1.
Let's say
a = [13, 4, 7, 2, 9, 17, 18]
Min: 2, Max: 18, n−1 = 6.
So my number of buckets will be 6, and delta = (18-2)/6 = 2. That is, 6 buckets having 2 elements each (12 elements total I can have).
Step 2 question:
If there are only 12 elements, where would my max, 18, be?
Step 3:
For element 18, as per the algorithm it should be in math.floor((17-2)/float(2)) = 7.
So 18 should be in the 8th bucket, BUT we have only (n-1) = 6 buckets.
A mystery to me!
EDIT1:
Sorry, wrong math in step 3: with delta = (18-2)/6 ≈ 2.67, it is math.floor((17-2)/2.67) = 5.
Still need to figure out where the minimum and maximum go.
EDIT2:
As per the answer by Miljen Mikic:
He was right; my question is really "what do we do with the minimum and maximum?"
And in step 6:
In
 L
 find 
the 
maximum
 distance
 between
 a 
pair 
of 
consecutive
 minimum
 and
maximum
(xi­max,
xj­min), 
where 
j
>
i.
How come j > i? I.e., the max from the current bucket and the min from a later bucket?
In the algorithm you cited, you don't put minimum and maximum in the buckets. Pay attention to the Note after Step 5:
Note: Since there are n-1 buckets and only n-2 numbers..
If you put minimum and maximum in some buckets, then you would have had n numbers, not n-2. The real question now is: what to do with minimum and maximum? Actually, step 6 of the algorithm should be clarified a little bit more. When you examine the list L, you should start with x-min and compare it with x1-min, and you should end by comparing x(n-1)-max and x-max, because the maximum gap can actually include minimum or maximum, like you get e.g. in this example: [1,7,3,2]. Of course, these two additional comparisons still give you linear time complexity.
Note that you can change the algorithm slightly by putting minimum and maximum in buckets as well (by the exact same formula!) and then you would have n numbers and n buckets. Why? The minimum always goes in the first bucket (see the formula), and the maximum needs to go in the n-th bucket, which didn't exist previously, so we have one extra bucket if we apply this change. This means that in this case you cannot always apply the pigeonhole principle; however, it still holds that the maximum distance between a pair of consecutive elements must be at least the length of the bucket. How come? If at least one bucket contains two elements, then there must be some empty bucket and this is clear. Otherwise, all buckets contain exactly one element; this means that the first bucket contains the minimum, and the second bucket contains an element whose value is at least x_min + δ, so the difference between this element and x_min is at least δ, the length of the bucket. Why does the element in the second bucket have to be at least x_min + δ? If it were smaller, say x_min + δ − k where k > 0, then it would also belong to the first bucket, because floor(((x_min + δ − k) − x_min)/δ) = floor((δ − k)/δ) = 0, i.e. not to the second as we assumed!
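A Python sketch of the n-bucket variant just described, with minimum and maximum placed in buckets by the same formula (all names are mine):

def maximum_gap(a):
    n = len(a)
    lo, hi = min(a), max(a)
    if n < 2 or lo == hi:
        return 0
    delta = (hi - lo) / (n - 1)
    bmin = [None] * n  # bucket k gets x iff floor((x - lo) / delta) == k
    bmax = [None] * n
    for x in a:
        k = min(int((x - lo) / delta), n - 1)  # clamp guards float rounding
        bmin[k] = x if bmin[k] is None else min(bmin[k], x)
        bmax[k] = x if bmax[k] is None else max(bmax[k], x)
    best, prev_max = 0, lo
    for k in range(n):
        if bmin[k] is None:
            continue  # empty bucket; the gap spans it
        best = max(best, bmin[k] - prev_max)
        prev_max = bmax[k]
    return best

print(maximum_gap([13, 4, 7, 2, 9, 17, 18]))  # -> 4 (e.g. between 13 and 17)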

Finding the best pair of elements that don't exceed a certain weight?

I have a collection of objects, each of which has a weight and a value. I want to pick the pair of objects with the highest total value subject to the restriction that their combined weight does not exceed some threshold. Additionally, I am given two arrays, one containing the objects sorted by weight and one containing the objects sorted by value.
I know how to do it in O(n^2) but how can I do it in O(n)?
This is a combinatorial optimization problem, and the fact that the values are sorted means you can easily try a branch and bound approach.
I think that I have a solution that works in O(n log n) time and O(n) extra space. This isn't quite the O(n) solution you wanted, but it's still better than the naive quadratic solution.
The intuition behind the algorithm is that we want to be able to efficiently determine, for any amount of weight, the maximum value we can get with a single item that uses at most that much weight. If we can do this, we have a simple algorithm for solving the problem: iterate across the array of elements sorted by value. For each element, see how much additional value we could get by pairing a single element with it (using the values we precomputed), then find which of these pairs is maximum. If we can do the preprocessing in O(n log n) time and can answer each of the above queries in O(log n) time, then the total time for the second step will be O(n log n) and we have our answer.
An important observation we need to do the preprocessing step is as follows. Our goal is to build up a structure that can answer the question "which element with weight less than x has maximum value?" Let's think about how we might do this by adding one element at a time. If we have an element (value, weight) and the structure is empty, then we want to say that the maximum value we can get using weight at most "weight" is "value". This means that everything in the range [0, max_weight - weight) should be set to value. Otherwise, suppose that the structure isn't empty when we try adding in (value, weight). In that case, we want to say that any portion of the range [0, weight) whose value is less than value should be replaced by value.
The problem here is that when we do these insertions, there might be, on iteration k, O(k) different subranges that need to be updated, leading to an O(n^2) algorithm. However, we can use a very clever trick to avoid this. Suppose that we insert all of the elements into this data structure in descending order of value. In that case, when we add in (value, weight), because we add the elements in descending order of value, each existing value in the data structure must be higher than our value. This means that if the range [0, weight) intersects any range at all, those ranges will automatically be higher than value and so we don't need to update them. If we combine this with the fact that each range we add always spans from zero to some value, the only portion of the new range that could ever be added to the data structure is the range [weight, x), where x is the lowest weight stored in the data structure so far.
To summarize, assuming that we visit the (value, weight) pairs in descending order of value, we can update our data structure as follows:
If the structure is empty, record that the range [0, value) has value "value."
Otherwise, if the lowest weight recorded in the structure is less than or equal to weight, skip this element.
Otherwise, letting x be the lowest weight recorded so far, record that the range [weight, x) has value "value."
Notice that this means that we are always splitting ranges at the front of the list of ranges we have encountered so far. Because of this, we can think about storing the list of ranges as a simple array, where each array element tracks the upper endpoint of some range and the value assigned to that range. For example, we might track the ranges [0, 3), [3, 9), and [9, 12) as the array
3, 9, 12
If we then needed to split the range [0, 3) into [0, 1) and [1, 3), we could do so by prepending 1 to the list:
1, 3, 9, 12
If we represent this array in reverse (actually storing the ranges from high to low instead of low to high), this step of creating the array runs in O(n) time because at each point we just do O(1) work to decide whether or not to add another element onto the end of the array.
Once we have the ranges stored like this, to determine which of the ranges a particular weight falls into, we can just use a binary search to find the largest element smaller than that weight. For example, to look up 6 in the above array we'd do a binary search to find 3.
Finally, once we have this data structure built up, we can just look at each of the objects one at a time. For each element, we see how much weight is left, use a binary search in the other structure to see what element it should be paired with to maximize the total value, and then find the maximum attainable value.
Let's trace through an example. Given maximum allowable weight 10 and the objects
Weight | Value
-------+------
   2   |   3
   6   |   5
   4   |   7
   7   |   8
Let's see what the algorithm does. First, we need to build up our auxiliary structure for the ranges. We look at the objects in descending order of value, starting with the object of weight 7 and value 8. This means that if we ever have at least seven units of weight left, we can get 8 value. Our array now looks like this:
Weight: 7
Value: 8
Next, we look at the object of weight 4 and value 7. This means that with four or more units of weight left, we can get value 7:
Weight: 7 4
Value: 8 7
Repeating this for the next item (weight six, value five) does not change the array: if we ever had six or more units of free space left, we would never choose this object; we'd always take the seven-value item of weight four instead. We can tell this because there is already an entry in the table whose range includes a remaining weight of four.
Finally, we look at the last item (value 3, weight 2). This means that if we ever have weight two or more free, we could get 3 units of value. The final array now looks like this:
Weight: 7 4 2
Value: 8 7 3
Finally, we just look at the objects in any order to see what the best option is. When looking at the object of weight 2 and value 3, since the maximum allowed weight is 10, we need to see how much value we can get with at most 10 - 2 = 8 weight. A binary search over the array tells us that this value is 8, so one option would give us 11 total value. If we look at the object of weight 6 and value 5, a binary search tells us that with four remaining weight the best we can do would be to get 7 units of value, for a total of 12 value. Repeating this on the next two entries doesn't turn up anything new, so the optimum found has value 12, which is indeed the correct answer.
Hope this helps!
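If it helps, here is a short runnable Python rendering of the strategy above. It keeps the same O(n log n) outline but stores, for each prefix of the weight-sorted items, the two best values instead of the range array; the second value guards the corner case of an object being paired with itself. All names are my own:

import bisect

def best_pair(items, maxweight):
    """items: list of (weight, value); best total value of a pair that fits."""
    order = sorted(range(len(items)), key=lambda i: items[i][0])
    weights = [items[i][0] for i in order]
    top2, running = [], []  # top2[k]: two best (value, index) among k+1 lightest
    for i in order:
        running = sorted(running + [(items[i][1], i)], reverse=True)[:2]
        top2.append(list(running))
    best = None
    for i, (w, v) in enumerate(items):
        k = bisect.bisect_right(weights, maxweight - w) - 1  # heaviest feasible prefix
        if k < 0:
            continue
        for pv, pi in top2[k]:
            if pi != i:  # don't pair an element with itself
                best = v + pv if best is None else max(best, v + pv)
                break
    return best

print(best_pair([(2, 3), (6, 5), (4, 7), (7, 8)], 10))  # -> 12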
Here is an O(n) time, O(1) space solution.
Let's call an object x better than an object y if and only if (x is no heavier than y) and (x is no less valuable) and (x is lighter or more valuable). Call an object x first-choice if no object is better than x. There exists an optimal solution consisting either of two first-choice objects, or a first-choice object x and an object y such that only x is better than y.
The main tool is to be able to iterate the first-choice objects from lightest to heaviest (= least valuable to most valuable) and from most valuable to least valuable (= heaviest to lightest). The iterator state is an index into the objects by weight (resp. value) and a max value (resp. min weight) so far.
Each of the following steps is O(n).
During a scan, whenever we encounter an object that is not first-choice, we know an object that's better than it. Scan once and consider these pairs of objects.
For each first-choice object from lightest to heaviest, determine the heaviest first-choice object that it can be paired with, and consider the pair. (All lighter objects are less valuable.) Since the latter object becomes lighter over time, each iteration of the loop is amortized O(1). (See also searching in a matrix whose rows and columns are sorted.)
Code for the unbelievers. Not heavily tested.
from collections import namedtuple
from operator import attrgetter

Item = namedtuple('Item', ('weight', 'value'))
sentinel = Item(float('inf'), float('-inf'))

def firstchoicefrombyweight(byweight):
    # yields each item with the most valuable item that is no heavier than it
    bestsofar = sentinel
    for x in byweight:
        if x.value > bestsofar.value:
            bestsofar = x
        yield (x, bestsofar)

def firstchoicefrombyvalue(byvalue):
    # yields the first-choice items, from most valuable to least valuable
    bestsofar = sentinel
    for x in byvalue:
        if x.weight < bestsofar.weight:
            bestsofar = x
            yield x

def optimize(items, maxweight):
    byweight = sorted(items, key=attrgetter('weight'))
    byvalue = sorted(items, key=attrgetter('value'), reverse=True)
    maxvalue = float('-inf')
    try:
        i = firstchoicefrombyvalue(byvalue)
        y = next(i)
        for x, z in firstchoicefrombyweight(byweight):
            if z is not x and x.weight + z.weight <= maxweight:
                maxvalue = max(maxvalue, x.value + z.value)
            while x.weight + y.weight > maxweight:
                y = next(i)
            if y is x:
                break
            maxvalue = max(maxvalue, x.value + y.value)
    except StopIteration:
        pass
    return maxvalue

items = [Item(1, 1), Item(2, 2), Item(3, 5), Item(3, 7), Item(5, 8)]
for maxweight in range(3, 10):
    print(maxweight, optimize(items, maxweight))
This is similar to the knapsack problem, and I will use its naming (weight and value).
The essential part:
Start with a = 0 and b = n-1, assuming 0 is the index of the heaviest object and n-1 is the index of the lightest object.
Increase a until objects a and b satisfy the weight limit.
Compare current solution with best solution.
Decrease b by one.
Go to 2.
Update:
It's the knapsack problem, except there is a limit of 2 items. You basically need to decide how much space you want for the first object and how much for the other. There are n significant ways to split the available space, so the complexity is O(n). Picking the most valuable object that fits in each space can be done without additional cost.
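A runnable Python sketch of this idea (my own realization, sorting ascending by weight and using a prefix "best value so far" array so the most valuable object for each split is available at no extra cost):

def best_pair_two_pointer(items, maxweight):
    items = sorted(items, key=lambda it: it[0])  # sort by weight ascending
    n = len(items)
    weights = [w for w, _ in items]
    values = [v for _, v in items]
    best = values[:]  # best[k] = most valuable item among the k+1 lightest
    for k in range(1, n):
        best[k] = max(best[k - 1], values[k])
    ans = None
    a = 0
    for b in range(n - 1, 0, -1):  # b: the higher-indexed member of the pair
        # advance a while one more of the lightest items still fits with b
        while a < n and weights[a] + weights[b] <= maxweight:
            a += 1
        k = min(a - 1, b - 1)  # partner must be a distinct, lighter-indexed item
        if k >= 0:
            cand = values[b] + best[k]
            ans = cand if ans is None else max(ans, cand)
    return ans

print(best_pair_two_pointer([(2, 3), (6, 5), (4, 7), (7, 8)], 10))  # -> 12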
