Maximum profit earned on a weighted undirected tree - algorithm

I came across this problem while taking a sample test. We are given an undirected tree and can start from any node of our choice. Initially we have power "P", and while travelling from one node to an adjacent node we lose some power "X" (think of it as the cost of travelling) and earn some profit "Y".
We need to determine the maximum profit that we can earn with the given power.
Example: the first line contains the number of nodes and the initial power.
The next n-1 lines contain node-node-cost-profit:
5 4
1 2 1 2
1 3 2 3
1 4 2 4
4 5 2 2
Answer => 7. We can start from 4, go to 1 and then to 3.
I have applied DFS to get the maximum profit earned by traversing every single path.
But is there a way to reduce the running time?
from collections import defaultdict

class tree:
    def __init__(self, nodes):
        self.nodes = nodes
        self.graph = defaultdict(list)

    def add(self, a, b, charge, profit):
        self.graph[a].append([b, charge, profit])
        self.graph[b].append([a, charge, profit])

    def start(self, power):
        maxi = -1
        visited = [False for i in range(self.nodes)]
        for i in range(1, self.nodes + 1):
            powers = power
            visited[i - 1] = True
            for j in self.graph[i]:
                temp = self.dfs(j, powers, 0, visited)
                if temp > maxi:
                    maxi = temp
            visited[i - 1] = False
        return maxi

    def dfs(self, node, powers, profit, visited):
        v, p, pro = node[0], node[1], node[2]
        if powers - p < 0:
            return 0
        if powers - p == 0:
            return profit + pro
        profit += pro
        powers = powers - p
        visited[v - 1] = True
        tempo = profit
        for k in self.graph[v]:
            if visited[k[0] - 1] == False:
                temp = self.dfs(k, powers, tempo, visited)
                if temp > profit:
                    profit = temp
        visited[v - 1] = False
        return profit

t = tree(5)
t.add(1, 2, 1, 2)
t.add(1, 3, 2, 3)
t.add(1, 4, 2, 4)
t.add(4, 5, 2, 2)
print(t.start(4))

You want to find all paths of length at most P and take the maximum of their profits. You can achieve this in O(n log^2 n) time using centroid decomposition.
Consider all the subtrees you create by deleting the centroid C from the tree. Paths that lie entirely inside one of those subtrees are handled by recursion, so for now you only consider paths that contain C. Using DFS, calculate the distance and profit from C to every other node in the tree and store them as (distance, profit) pairs in a multiset ordered by distance.
For each subtree do:
delete from the multiset every pair belonging to a node of that subtree - O(n log n)
copy all the remaining pairs from the multiset to a list L1 - O(n) (the multiset is ordered by distance, so L1 comes out sorted by distance in increasing order)
create list L2 of pairs (distance, profit) from the current subtree and sort it by distance in decreasing order - O(n log n)
create variable maxx = 0 and i = 0
for each pair X in L2:
while i < |L1| and L1[i].distance <= P - X.distance do: maxx = max(maxx, L1[i].profit), ++i
result = max(result, maxx + X.profit)
all of this takes at most O(n) in total, since the pointer i only moves forward (see the Python sketch below)
insert all pairs from L2 back to multiset - O(n log n)
Time complexity of processing one centroid: O(n log n)
Now you have calculated the maximum profit over all paths of length at most P that pass through C. To get the values for paths inside the subtrees, run the same algorithm recursively. Since centroid decomposition has at most O(log n) layers, the total complexity is O(n log^2 n).
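To make the two-pointer step above concrete, here is a minimal Python sketch of just that step (my own illustration, not part of the original answer). It assumes L1 and L2 hold (distance, profit) pairs measured from the centroid, L1 sorted by distance ascending and L2 by distance descending, and that a path joining one node from each side has length d1 + d2 and profit p1 + p2:
def merge_through_centroid(L1, L2, P):
    # L1: pairs from the other subtrees, sorted by distance ascending.
    # L2: pairs from the current subtree, sorted by distance descending.
    best = 0   # best profit of a path through the centroid found so far
    maxx = 0   # best profit among admitted partners (0 = stop at the centroid)
    i = 0
    for d2, p2 in L2:
        # Admit every partner whose distance still fits within the power budget;
        # as d2 decreases, the budget P - d2 only grows, so i never moves back.
        while i < len(L1) and L1[i][0] <= P - d2:
            maxx = max(maxx, L1[i][1])
            i += 1
        if d2 <= P:   # the current node can at least reach the centroid
            best = max(best, maxx + p2)
    return best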

Related

Fibonacci sums on a tree

Given a tree with n nodes (n can be as large as 2 * 10^5), where each node has a cost associated with it, let us define the following functions:
g(u, v) = the sum of all costs on the simple path from u to v
f(n) = the (n + 1)th Fibonacci number (n + 1 is not a typo)
The problem I'm working on requires me to compute the sum of f(g(u, v)) over all possible pairs of nodes in the tree modulo 10^9 + 7.
As an example, let's take a tree with 3 nodes.
Without loss of generality, let's say node 1 is the root, and its children are 2 and 3.
cost[1] = 2, cost[2] = 1, cost[3] = 1
g(1, 1) = 2; f(2) = 2
g(2, 2) = 1; f(1) = 1
g(3, 3) = 1; f(1) = 1
g(1, 2) = 3; f(3) = 3
g(2, 1) = 3; f(3) = 3
g(1, 3) = 3; f(3) = 3
g(3, 1) = 3; f(3) = 3
g(2, 3) = 4; f(4) = 5
g(3, 2) = 4; f(4) = 5
Summing all of the values, and taking the result modulo 10^9 + 7 gives 26 as the correct answer.
My attempt:
I implemented an algorithm to compute g(u, v) in O(log n) by finding the lowest common ancestor using a sparse table.
For finding the appropriate Fibonacci values, I tried two approaches: exponentiation of the matrix form, and exploiting the fact that the Fibonacci sequence modulo 10^9 + 7 is cyclic.
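For reference, here is a small sketch of the matrix-exponentiation piece mentioned above (a standard technique; the indexing follows the f(n) = (n + 1)th Fibonacci number convention from the problem statement):
MOD = 10**9 + 7

def mat_mul(a, b):
    # 2x2 matrix multiplication modulo MOD
    return [[(a[0][0]*b[0][0] + a[0][1]*b[1][0]) % MOD,
             (a[0][0]*b[0][1] + a[0][1]*b[1][1]) % MOD],
            [(a[1][0]*b[0][0] + a[1][1]*b[1][0]) % MOD,
             (a[1][0]*b[0][1] + a[1][1]*b[1][1]) % MOD]]

def f(n):
    # [[1,1],[1,0]]^n has F(n+1) in its top-left entry, which is f(n).
    result = [[1, 0], [0, 1]]
    base = [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][0]

# f(2) == 2, f(3) == 3, f(4) == 5, matching the example above.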
Now comes the extremely tricky part. No matter how I do the above computations, I still end up going through up to O(n^2) pairs when calculating the sum of all possible f(g(u, v)). I mean, there's the obvious improvement of only going over n * (n - 1) / 2 pairs, but that's still quadratic.
What am I missing? I've been at it for several hours, but I can't see a way to get that sum without actually producing a quadratic algorithm.
To know how many times the cost of a node X is to be included in the total sum, we divide the other nodes into 3 (or more) groups:
the subtree A connected to the left of X
the subtree B connected to the right of X
(subtrees C, D... if the tree is not binary)
all other nodes Y, connected through X's parent
When two nodes belong to different groups, their simple path goes through X. So the number of simple paths that go through X is:
#Y + #A × (N - #A) + #B × (N - #B)
So by counting the total number of nodes N, and the size of the subtrees under X, you can calculate how many times the cost of node X should be included in the total sum. Do this for every node and you have the total cost.
The code for this could be straightforward. I'll assume that the total number of nodes N is known, and that you can add properties to the nodes (both of these assumptions simplify the algorithm, but it can be done without them).
We'll add a child_count to store the number of descendants of the node, and a path_count to store the number of simple paths that the node is part of; both are initialised to zero.
For each node, starting from the root (a Python sketch of this traversal follows the list):
If not all children have been visited, go to an unvisited child.
If all children have been visited (or node is leaf):
Increment child_count.
Increase path_count with N - child_count.
Add this node's path_count × cost to the total cost.
If the current node is the root, we're done; otherwise:
Increase the parent node's child_count with this node's child_count.
Increase the parent node's path_count with this node's child_count × (N - child_count).
Go to the parent node.
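Here is a direct Python transcription of the traversal described above (my own sketch; it assumes the tree is given as a child list rooted at node 0 and uses an explicit processing order instead of walking up and down):
def total_cost_times_path_count(children, cost):
    # children[u]: list of children of node u (node 0 is the root); cost[u]: its cost.
    n = len(cost)
    child_count = [0] * n
    path_count = [0] * n
    parent = [None] * n
    total = 0
    # Build a processing order so that every node is finished before its parent,
    # which is what "visit all children, then go to the parent" amounts to.
    order, stack = [], [0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v in children[u]:
            parent[v] = u
            stack.append(v)
    for u in reversed(order):
        child_count[u] += 1                        # count the node itself
        path_count[u] += n - child_count[u]
        total += path_count[u] * cost[u]
        p = parent[u]
        if p is not None:
            child_count[p] += child_count[u]
            path_count[p] += child_count[u] * (n - child_count[u])
    return total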
The algorithm below runs in O(n^3) time.
A tree is a connected graph without cycles. So when we want to get the costs of all possible pairs, we are effectively finding the shortest paths for all pairs. Thus, we can use Dijkstra's idea and a dynamic programming approach for this problem (I took it from Weiss's book). Then we apply the Fibonacci function to each cost, assuming we already have a lookup table for it.
Dijkstra's idea: we start from the root and search all simple paths from the root to all other nodes, and then do the same for the other vertices of the graph.
Dynamic programming approach: we use a 2D matrix D[][] to represent the lowest path/cost (they can be used interchangeably here) between node i and node j. Initially, D[i][i] is already set. If nodes i and j are parent and child, D[i][j] = g(i, j), the cost between them. If node k lies on a lower-cost path between nodes i and j, we can update D[i][j], i.e., D[i][j] = min(D[i][j], D[i][k] + D[k][j]).
When done, we go over the D[][] matrix, apply the Fibonacci function to each cell, add the values up, and take the result modulo 10^9 + 7.

Find longest sequences with sufficient average score

I have a long list of scores between 0 and 1. How do I efficiently find all contiguous sublists longer than x elements such that the average score in each sublist is not less than y?
E.g., how do I find all contiguous sublists longer than 300 elements such that the average score of these sublists is not less than 0.8?
I'm mainly interested in the LONGEST sublists that fulfill these criteria, not actually all sublists. So I'm looking for all longest sublists.
If you want only the longest such substrings, this can be solved in O(n log n) time by transforming the problem slightly and then binary-searching over maximum solution lengths.
Let the input list of scores be x[1], ..., x[n]. Let's transform this list by subtracting y from each element, to form the list z[1], ..., z[n], whose elements may be positive or negative. Notice that any sublist x[i .. j] has average score at least y if and only if the sum of elements in the corresponding sublist in z (i.e., z[i] + z[i+1] + ... + z[j]) is at least 0. So, if we had a way to compute the maximum sum T of any sublist in z[] efficiently (spoiler: we do), this would, as a side effect, tell us if there is any sublist in x[] that has average score at least y: if T >= 0 then there is at least 1 such sublist, while if T < 0 then there is no sublist in x[] (not even a single-element sublist) that has average score at least y. But this doesn't yet give us all the information we need to answer your original question, since nothing forces the maximum-sum sublist in z to have maximum length: it could well be that a longer sublist exists that has lower overall average, while still having average at least y.
This can be addressed by generalising the problem of finding the sublist with maximum sum: instead of asking for a sublist with maximum sum overall, we will now ask for a sublist having maximum sum among all sublists having length at least some given k. I'll now describe an algorithm that, given a list of numbers z[1], ..., z[n], each of which can be positive or negative, and any positive integer k, will compute the maximum sum of any sublist of z[] having length at least k, as well as the location of a particular sublist that achieves this sum, and has longest possible length among all sublists having this sum. It's a slight generalisation of Kadane's algorithm.
FindMaxSumLongerThan(z[], k):
    v = 0          # Sum of the rightmost k numbers in the current sublist
    For i from 1 to k:
        v = v + z[i]
    best = v
    bestStart = 1
    bestEnd = k
    # Now for each i, with k+1 <= i <= n, find the biggest sum ending at position i.
    tail = -1      # Will contain the maximum sum among all lists ending at i-k
    tailLen = 0    # The length of the longest list having the above sum
    For i from k+1 to n:
        If tail >= 0:
            tail = tail + z[i-k]
            tailLen = tailLen + 1
        Else:
            tail = z[i-k]
            tailLen = 1
        If tail >= 0:
            nonnegTail = tail
            nonnegTailLen = tailLen
        Else:
            nonnegTail = 0
            nonnegTailLen = 0
        v = v + z[i] - z[i-k]      # Slide the window right 1 position
        If v + nonnegTail > best:
            best = v + nonnegTail
            bestStart = i - k - nonnegTailLen + 1
            bestEnd = i
The above algorithm takes O(n) time and O(1) space, returning the maximum sum in best and the beginning and ending positions of some sublist that achieves that sum in bestStart and bestEnd, respectively.
How is the above useful? For a given input list x[], suppose we first transform x[] into z[] by subtracting y from each element as described above; this will be the z[] passed into every call to FindMaxSumLongerThan(). We can view the value of best that results from calling the function with z[] and a given minimum sublist length k as a mathematical function of k: best(k). Since FindMaxSumLongerThan() finds the maximum sum of any sublist of z[] having length at least k, best(k) is a nonincreasing function of k. (Say we set k=5 and found that the maximum sum of any sublist is 42; then we are guaranteed to find a total of at least 42 if we try again with k=4 or k=3.) That means we can binary search on k to find the largest k such that best(k) >= 0: that k will then be the length of the longest sublist of x[] that has average value at least y. The resulting bestStart and bestEnd will identify a particular sublist having this property; it's easy to modify the algorithm to find all (at most n -- one per rightmost position) of these sublists without increasing the time complexity.
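Here is a runnable Python sketch of the whole approach, as I would port it (0-indexed; the names are mine, not from the pseudocode above):
def find_max_sum_at_least_len(z, k):
    # Max sum of any sublist of z with length >= k (0-indexed port of
    # FindMaxSumLongerThan above); returns (best, best_start, best_end).
    n = len(z)
    v = sum(z[:k])                       # sum of the current length-k window
    best, best_start, best_end = v, 0, k - 1
    tail, tail_len = -1.0, 0             # best sum of a sublist ending just left of the window
    for i in range(k, n):
        if tail >= 0:
            tail, tail_len = tail + z[i - k], tail_len + 1
        else:
            tail, tail_len = z[i - k], 1
        nn_tail, nn_len = (tail, tail_len) if tail >= 0 else (0.0, 0)
        v += z[i] - z[i - k]             # slide the window right one position
        if v + nn_tail > best:
            best = v + nn_tail
            best_start, best_end = i - k + 1 - nn_len, i
    return best, best_start, best_end

def longest_sublist_with_min_average(x, y, min_len=1):
    # Binary search on k as described above: the largest k with best(k) >= 0
    # gives the longest sublist of x with average at least y; returns (start, end)
    # indices of one such sublist, or None if none exists.
    z = [v - y for v in x]
    lo, hi, answer = min_len, len(x), None
    while lo <= hi:
        mid = (lo + hi) // 2
        best, s, e = find_max_sum_at_least_len(z, mid)
        if best >= 0:
            answer = (s, e)
            lo = mid + 1
        else:
            hi = mid - 1
    return answer
For the numbers in the question this would be called as longest_sublist_with_min_average(scores, 0.8, 300).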
I think the general solution is always O(N^2). I will demonstrate code in Python and some optimizations you can implement to increase the performance by several orders of magnitude.
Let's generate some data:
from random import random
scores_list = [random() for i in range(10000)]
scores_len = len(scores_list)
Let's say these are our target values:
# Your average
avg = 0.55
# Your min length
min_len = 10
Here is a naive brute force solution
res = []
for i in range(scores_len - min_len):
    for j in range(i+min_len, scores_len):
        l = scores_list[i:j]
        if sum(l) / (j - i) >= avg:
            res.append(l)
That will run very slowly because it has to perform 10000^2 (10^8) operations.
Here is how we can do it better. It is still quadratic in the worst case, but there are some tricks which allow it to perform much, much faster:
res = []
i = 0
while i < scores_len - min_len:
    j = i + min_len
    di = scores_len
    dj = 0
    current_sum = sum(scores_list[i:j])
    while j < scores_len:
        current_sum += sum(scores_list[j-dj:j])
        current_avg = current_sum/(j - i)
        if current_avg >= avg:
            res.append(scores_list[i:j])
            dj = 1
            di = 1
        else:
            dj = max(1, int((avg * (j - i) - current_sum)/(1 - avg)))
            di = min(di, max(1, int(((j-i) * avg - current_sum)/avg)))
        j += dj
    i += di
For a uniform distribution (which we have here) and for the given target values it performs fewer than 10^6 operations (~7 * 10^5), which is two orders of magnitude less than the brute-force solution.
So basically, if only a few sublists qualify it performs very well, and if a lot of them qualify it will be about the same as the brute-force one.

Sum of maximum element of sliding window of length K

Recently I got stuck on a problem. Part of the algorithm requires computing the sum of the maximum elements of all sliding windows of length K, where K ranges over 1 <= K <= N (N is the length of the array).
Example: if I have an array A = 5, 3, 12, 4
Sliding window of length 1: 5 + 3 + 12 + 4 = 24
Sliding window of length 2: 5 + 12 + 12 = 29
Sliding window of length 3: 12 + 12 = 24
Sliding window of length 4: 12
Final answer is 24,29,24,12.
I have tried an O(N^2) approach: for each window length K, I can calculate the sum of the window maximums in O(N). Since K goes up to N, the overall complexity turns out to be O(N^2).
I am looking for an O(N) or O(N log N) algorithm, or something similar, as N may be up to 10^5.
Note: elements of the array can be as large as 10^9, so output the final answer modulo 10^9 + 7.
EDIT: What I actually want is the answer for each and every value of K (i.e. from 1 to N) in overall linear or O(N log N) time, not in O(KN) or O(KN log N), where K = {1, 2, 3, ..., N}.
Here's an abbreviated sketch of O(n).
For each element, determine how many contiguous elements to the left are no greater (call this a), and how many contiguous elements to the right are lesser (call this b). This can be done for all elements in time O(n) -- see MBo's answer.
A particular element is the maximum of its window if the window contains the element and only elements among the a to its left and the b to its right. Usefully, the number of such windows of length k (and hence the total contribution of these windows) is piecewise linear in k, with at most five pieces. For example, if a = 5 and b = 3, there are
1 window of size 1
2 windows of size 2
3 windows of size 3
4 windows of size 4
4 windows of size 5
4 windows of size 6
3 windows of size 7
2 windows of size 8
1 window of size 9.
The data structure that we need to encode this contribution efficiently is a Fenwick tree whose values are not numbers but linear functions of k. For each linear piece of the piecewise linear contribution function, we add it to the cell at beginning of its interval and subtract it from the cell at the end (closed beginning, open end). At the end, we retrieve all of the prefix sums and evaluate them at their index k to get the final array.
(OK, have to run for now, but we don't actually need a Fenwick tree for step two, which drops the complexity to O(n) for that, and there may be a way to do step one in linear time as well.)
Python 3, lightly tested:
def left_extents(lst):
    result = []
    stack = [-1]
    for i in range(len(lst)):
        while stack[-1] >= 0 and lst[i] >= lst[stack[-1]]:
            del stack[-1]
        result.append(stack[-1] + 1)
        stack.append(i)
    return result

def right_extents(lst):
    result = []
    stack = [len(lst)]
    for i in range(len(lst) - 1, -1, -1):
        while stack[-1] < len(lst) and lst[i] > lst[stack[-1]]:
            del stack[-1]
        result.append(stack[-1])
        stack.append(i)
    result.reverse()
    return result

def sliding_window_totals(lst):
    delta_constant = [0] * (len(lst) + 2)
    delta_linear = [0] * (len(lst) + 2)
    for l, i, r in zip(left_extents(lst), range(len(lst)), right_extents(lst)):
        a = i - l
        b = r - (i + 1)
        if a > b:
            a, b = b, a
        delta_linear[1] += lst[i]
        delta_linear[a + 1] -= lst[i]
        delta_constant[a + 1] += lst[i] * (a + 1)
        delta_constant[b + 2] += lst[i] * (b + 1)
        delta_linear[b + 2] -= lst[i]
        delta_linear[a + b + 2] += lst[i]
        delta_constant[a + b + 2] -= lst[i] * (a + 1)
        delta_constant[a + b + 2] -= lst[i] * (b + 1)
    result = []
    constant = 0
    linear = 0
    for j in range(1, len(lst) + 1):
        constant += delta_constant[j]
        linear += delta_linear[j]
        result.append(constant + linear * j)
    return result

print(sliding_window_totals([5, 3, 12, 4]))
Let's determine, for every element, the interval where this element is dominating (i.e. is the maximum). We can do this in linear time with forward and backward passes using a stack. Arrays L and R will contain the indexes just outside the domination interval.
To get right and left indexes:
Stack.Push(0)              //(1st element index)
for i = 1 to Len - 1 do
    while X[Stack.Peek] < X[i] do
        j = Stack.Pop
        R[j] = i           //j-th position is dominated by i-th one from the right
    Stack.Push(i)
while not Stack.Empty
    R[Stack.Pop] = Len     //the rest of elements are not dominated from the right

//now right to left
Stack.Push(Len - 1)        //(last element index)
for i = Len - 2 downto 0 do
    while X[Stack.Peek] < X[i] do
        j = Stack.Pop
        L[j] = i           //j-th position is dominated by i-th one from the left
    Stack.Push(i)
while not Stack.Empty
    L[Stack.Pop] = -1      //the rest of elements are not dominated from the left
Result for (5,7,3,9,4) array.
For example, 7 dominates at 0..2 interval, 9 at 0..4
i   0   1   2   3   4
X   5   7   3   9   4
R   1   3   3   5   5
L  -1  -1   1  -1   3
Now for every element we can count its impact on every possible sum.
Element 5 dominates at (0,0) interval, it is summed only in k=1 sum entry
Element 7 dominates at (0,2) interval, it is summed once in k=1 sum entry, twice in k=2 entry, once in k=3 entry.
Element 3 dominates at (2,2) interval, it is summed only in k=1 sum entry
Element 9 dominates at (0,4) interval, it is summed once in k=1 sum entry, twice in k=2, twice in k=3, twice in k=4, once in k=5.
Element 4 dominates at (4,4) interval, it is summed only in k=1 sum entry.
In general, an element with a long domination interval in the center of a long array may contribute up to k*Value to the k-length sum (it depends on its position relative to the array ends and to the other dominating elements).
k        1     2     3     4     5
----------------------------------
         5
         7   2*7     7
         3
         9   2*9   2*9   2*9     9
         4
----------------------------------
S(k)    28    32    25    18     9
Note that the sum of the coefficients is N*(N+1)/2 (equal to the number of possible windows), and most of the table entries are empty, so the complexity seems better than O(N^2) (I still have doubts about the exact complexity).
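As a quick sanity check of the S(k) table above, here is a tiny brute-force snippet (mine; O(N^2), intended only for small examples):
def window_max_sums(xs):
    # S(k) = sum of the maximum of every window of length k, for k = 1..N
    n = len(xs)
    return [sum(max(xs[i:i + k]) for i in range(n - k + 1)) for k in range(1, n + 1)]

print(window_max_sums([5, 7, 3, 9, 4]))   # -> [28, 32, 25, 18, 9]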
The sum of maximum in sliding windows for a given window size can be computed in linear time using a double ended queue that keeps elements from the current window. We maintain the deque such that the first (index 0, left most) element in the queue is always the maximum of the current window.
This is done by iterating over the array and in each iteration, first we remove the first element in the deque if it is no longer in the current window (we do that by checking its original position, which is also saved in the deque together with its value). Then, we remove any elements from the end of the deque that are smaller than the current element, and finally we add the current element to the end of the deque.
The complexity is O(N) for computing the maximum for all sliding windows of size K. If you want to do that for all values of K from 1..N, then the time complexity will be O(N^2). O(N) is the best possible time to compute the sum of maximum values of all windows of size K (that is easy to see). To compute the sum for other values of K, the simple approach is to repeat the computation for each different value of K, which would lead to an overall time of O(N^2). Is there a better way? No, because even if we save the result from the computation for one value of K, we would not be able to use it to compute the result for a different value of K in less than O(N) time. So the best time is O(N^2).
The following is an implementation in python:
from collections import deque

def slide_win(l, k):
    dq = deque()
    for i in range(len(l)):
        if len(dq) > 0 and dq[0][1] <= i - k:
            dq.popleft()
        while len(dq) > 0 and l[i] >= dq[-1][0]:
            dq.pop()
        dq.append((l[i], i))
        if i >= k - 1:
            yield dq[0][0]

def main():
    l = [5, 3, 12, 4]
    print("l=" + str(l))
    for k in range(1, len(l) + 1):
        s = 0
        for x in slide_win(l, k):
            s += x
        print("k=" + str(k) + " Sum=" + str(s))

main()

Finding largest minimum distance among k objects in n possible distinct positions?

What is an efficient way to find largest minimum distance among k objects in n possible distinct positions?
For eg:
N: Number of distinct positions
Lets say N = 5
and the 5 positions are {1,2,4,8,9}
K: Number of objects let say k = 3
So the possible answer (Largest Minimum Distance) would be: 3 if we put objects at {1,4,8} or {1,4,9}
Let's do a binary search over the answer.
For a fixed answer x, we can check whether it is feasible or not using a simple linear greedy algorithm (with the positions sorted, pick the first element and then iterate over the rest of the array, picking the current element if the distance between it and the last picked element is greater than or equal to x). In the end, we just need to check that the number of picked elements is at least k.
The time complexity is O(n * log MAX_A), where MAX_A is the maximum element of the array.
Here is pseudocode for this algorithm, written as runnable Python:
def isFeasible(positions, dist, k):
    # positions are assumed to be sorted in increasing order
    taken = 1
    last = positions[0]
    for i in range(1, len(positions)):
        if positions[i] - last >= dist:
            taken += 1
            last = positions[i]
    return taken >= k

def solve(positions, k):
    low = 0                                         # definitely small enough
    high = max(positions) - min(positions) + 1      # definitely too big
    while high - low > 1:
        mid = (low + high) // 2
        if isFeasible(positions, mid, k):
            low = mid
        else:
            high = mid
    return low
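A quick check against the example above (note that isFeasible assumes the positions are sorted, so sort them first):
print(solve(sorted([1, 2, 4, 8, 9]), 3))   # prints 3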

Efficient algorithm for random sampling from a distribution while allowing updates?

This is a question I was asked some time ago in an interview, and I could not find an answer for it.
Given some samples S1, S2, ..., Sn and their probabilities (or weights, whatever they are called) P1, P2, ..., Pn, design an algorithm that randomly chooses a sample, taking its probability into account. The solution I came up with is as follows:
Build a cumulative array of weights Ci, such that
C0 = 0;
Ci = C[i-1] + Pi,
and at the same time calculate T = P1 + P2 + ... + Pn.
This takes O(n) time.
Generate a uniformly random number R = T * random[0..1].
Using binary search, return the least i such that Ci >= R;
the result is Si. This takes O(log N) time.
Now the actual question is:
suppose I want to change one of the initial weights Pj. How can this be done in better than O(n) time?
Other data structures are acceptable, but the random sampling algorithm should not get worse than O(log N).
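For reference, here is a minimal sketch of the baseline described above (prefix sums plus binary search; the names are mine):
import bisect
import random

def build_cumulative(weights):
    # O(n): C[0] = 0, C[i] = C[i-1] + P_i, so C[-1] == T
    c = [0.0]
    for w in weights:
        c.append(c[-1] + w)
    return c

def sample_index(c):
    # O(log n): least i such that C[i] >= R, returned as a 0-based sample index
    r = random.random() * c[-1]
    return bisect.bisect_left(c, r, 1) - 1
Changing one weight Pj then forces rebuilding the suffix of C in O(n), which is exactly the cost the question wants to avoid.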
One way to solve this is to rethink how your binary search tree containing the cumulative totals is built. Rather than building a binary search tree, think about having each node interpreted as follows:
Each node stores a range of values that are dedicated to the node itself.
Nodes in the left subtree represent sampling from the probability distribution just to the left of that range.
Nodes in the right subtree represent sampling from the probability distribution just to the right of that range.
For example, suppose our weights are 3, 2, 2, 2, 2, 1, and 1 for events A, B, C, D, E, F, and G. We build this binary tree holding A, B, C, D, E, F, and G:
          D
        /   \
       B     F
      / \   / \
     A   C E   G
Now, we annotate the tree with probabilities. Since A, C, E, and G are all leaves, we give each of them probability mass one:
          D
        /   \
       B     F
      / \   / \
     A   C E   G
     1   1 1   1
Now, look at the tree for B. B has weight 2 of being chosen, A has weight 3 of being chosen, and C has weight 2 of being chosen. If we normalize these to the range [0, 1), then A accounts for 3/7 of the probability and B and C each account for 2/7. Thus we have the node for B say that anything in the range [0, 3/7) goes to the left subtree, anything in the range [3/7, 5/7) maps to B, and anything in the range [5/7, 1) maps to the right subtree:
                  D
                /   \
          B               F
[0, 3/7) / \ [5/7, 1)    / \
        A   C           E   G
        1   1           1   1
Similarly, let's process F. E has weight 2 of being chosen while F and G each have weight 1 of being chosen. Thus the subtree for E accounts for 1/2 of the probability mass here, the node F accounts for 1/4, and the subtree for G accounts for 1/4. This means we can assign probabilities as
                      D
                    /   \
          B                       F
[0, 3/7) / \ [5/7, 1)   [0, 1/2) / \ [3/4, 1)
        A   C                   E   G
        1   1                   1   1
Finally, let's look at the root. The combined weight of the left subtree is 3 + 2 + 2 = 7. The combined weight of the right subtree is 2 + 1 + 1 = 4. The weight of D itself is 2. Thus the left subtree has probability 7/13 of being picked, D has probability 2/13 of being picked, and the right subtree has probability 4/13 of being picked. We can thus finalize the probabilities as
                      D
          [0, 7/13) /   \ [9/13, 1)
          B                       F
[0, 3/7) / \ [5/7, 1)   [0, 1/2) / \ [3/4, 1)
        A   C                   E   G
        1   1                   1   1
To generate a random value, you would repeat the following:
Starting at the root:
Choose a uniformly-random value in the range [0, 1).
If it's in the range for the left subtree, descend into it.
If it's in the range for the right subtree, descend into it.
Otherwise, return the value corresponding to the current node.
The probabilities themselves can be determined recursively when the tree is built:
The left and right probabilities are 0 for any leaf node.
If an interior node itself has weight W, its left tree has total weight WL, and its right tree has total weight WR, then the left probability is (WL) / (W + WL + WR) and the right probability is (WR) / (W + WL + WR).
The reason that this reformulation is useful is that it gives us a way to update probabilities in O(log n) time per probability updated. In particular, let's think about what invariants are going to change if we update some particular node's weight. For simplicity, let's assume the node is a leaf for now. When we update the leaf node's weight, the probabilities are still correct for the leaf node, but they're incorrect for the node just above it, because the weight of one of that node's subtrees has changed. Thus we can (in O(1) time) recompute the probabilities for the parent node by just using the same formula as above. But then the parent of that node no longer has the correct values because one of its subtree weights has changed, so we can recompute the probability there as well. This process repeats all the way back up to the root of the tree, with us doing O(1) computation per level to rectify the weights assigned to each edge. Assuming that the tree is balanced, we therefore have to do O(log n) total work to update one probability. The logic is identical if the node isn't a leaf node; we just start somewhere in the tree.
In short, this gives
O(n) time to construct the tree (using a bottom-up approach),
O(log n) time to generate a random value, and
O(log n) time to update any one value.
Hope this helps!
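Here is a compact runnable sketch of the structure just described, using the A..G example weights above (class and function names are mine, and the tree is built by hand rather than bottom-up):
import random

class Node:
    def __init__(self, value, weight, left=None, right=None):
        self.value, self.weight = value, weight
        self.left, self.right, self.parent = left, right, None
        for child in (left, right):
            if child is not None:
                child.parent = self
        self.total = 0.0                 # total weight of the subtree rooted here
        self.p_left = self.p_right = 0.0
        self.refresh()

    def refresh(self):
        # p_left = WL / (W + WL + WR), p_right = WR / (W + WL + WR), as above
        wl = self.left.total if self.left else 0.0
        wr = self.right.total if self.right else 0.0
        self.total = self.weight + wl + wr
        self.p_left, self.p_right = wl / self.total, wr / self.total

    def set_weight(self, weight):
        # O(log n) update: fix this node, then walk up refreshing each ancestor.
        self.weight = weight
        node = self
        while node is not None:
            node.refresh()
            node = node.parent

def sample(root):
    # Descend as described: at each node draw u in [0, 1) and go left, stop, or go right.
    node = root
    while True:
        u = random.random()
        if node.left is not None and u < node.p_left:
            node = node.left
        elif node.right is not None and u >= 1.0 - node.p_right:
            node = node.right
        else:
            return node.value

# Weights 3, 2, 2, 2, 2, 1, 1 for A, B, C, D, E, F, G from the example above.
A, C, E, G = Node('A', 3), Node('C', 2), Node('E', 2), Node('G', 1)
B, F = Node('B', 2, A, C), Node('F', 1, E, G)
D = Node('D', 2, B, F)

print(sample(D))      # e.g. 'A'
B.set_weight(5)       # update in O(log n); D's ranges are refreshed automatically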
Instead of an array, store the search structured as a balanced binary tree. Every node of the tree should store the total weight of the elements it contains. Depending on the value of R, the search procedure either returns the current node or searches through the left or right subtree.
When the weight of an element is changed, the updating of the search structure is a matter of adjusting the weights on the path from the element to the root of the tree.
Since the tree is balanced, the search and the weight update operations are both O(log N).
For those of you who would like some code, here's a python implementation:
import numpy

class DynamicProbDistribution(object):
    """ Given a set of weighted items, randomly samples an item with probability
    proportional to its weight. This class also supports fast modification of the
    distribution, so that changing an item's weight requires O(log N) time.
    Sampling requires O(log N) time. """

    def __init__(self, weights):
        self.num_weights = len(weights)
        self.weights = numpy.empty((1+len(weights),), 'float32')
        self.weights[0] = 0  # Not necessary but easier to read after printing
        self.weights[1:] = weights
        self.weight_tree = numpy.zeros((1+len(weights),), 'float32')
        self.populate_weight_tree()

    def populate_weight_tree(self):
        """ The value of every node in the weight tree is equal to the sum of all
        weights in the subtree rooted at that node. """
        i = self.num_weights
        while i > 0:
            weight_sum = self.weights[i]
            twoi = 2*i
            if twoi < self.num_weights:
                weight_sum += self.weight_tree[twoi] + self.weight_tree[twoi+1]
            elif twoi == self.num_weights:
                weight_sum += self.weights[twoi]
            self.weight_tree[i] = weight_sum
            i -= 1

    def set_weight(self, item_idx, weight):
        """ Changes the weight of the given item. """
        i = item_idx + 1
        self.weights[i] = weight
        while i > 0:
            weight_sum = self.weights[i]
            twoi = 2*i
            if twoi < self.num_weights:
                weight_sum += self.weight_tree[twoi] + self.weight_tree[twoi+1]
            elif twoi == self.num_weights:
                weight_sum += self.weights[twoi]
            self.weight_tree[i] = weight_sum
            i //= 2  # Only need to modify the parents of this node

    def sample(self):
        """ Returns an item index sampled from the distribution. """
        i = 1
        while True:
            twoi = 2*i
            if twoi < self.num_weights:
                # Two children
                val = numpy.random.random() * self.weight_tree[i]
                if val < self.weights[i]:
                    # all indices are offset by 1 for fast traversal of the
                    # internal binary tree
                    return i-1
                elif val < self.weights[i] + self.weight_tree[twoi]:
                    i = twoi  # descend into the subtree
                else:
                    i = twoi + 1
            elif twoi == self.num_weights:
                # One child
                val = numpy.random.random() * self.weight_tree[i]
                if val < self.weights[i]:
                    return i-1
                else:
                    i = twoi
            else:
                # No children
                return i-1

def validate_distribution_results(dpd, weights, samples_per_item=1000):
    import time
    bins = numpy.zeros((len(weights),), 'float32')
    num_samples = samples_per_item * numpy.sum(weights)
    start = time.time()
    for i in range(num_samples):
        bins[dpd.sample()] += 1
    duration = time.time() - start
    bins *= numpy.sum(weights)
    bins /= num_samples
    print("Time to make %s samples: %s" % (num_samples, duration))
    # These should be very close to each other
    print("\nWeights:\n", weights)
    print("\nBins:\n", bins)
    sdev_tolerance = 10  # very unlikely to be exceeded
    tolerance = float(sdev_tolerance) / numpy.sqrt(samples_per_item)
    print("\nTolerance:\n", tolerance)
    error = numpy.abs(weights - bins)
    print("\nError:\n", error)
    assert (error < tolerance).all()

##test
def test_DynamicProbDistribution():
    # First test that the initial distribution generates valid samples.
    weights = [2,5,4, 8,3,6, 6,1,3, 4,7,9]
    dpd = DynamicProbDistribution(weights)
    validate_distribution_results(dpd, weights)
    # Now test that we can change the weights and still sample from the
    # distribution.
    print("\nChanging weights...")
    dpd.set_weight(4, 10)
    weights[4] = 10
    dpd.set_weight(9, 2)
    weights[9] = 2
    dpd.set_weight(5, 4)
    weights[5] = 4
    dpd.set_weight(11, 3)
    weights[11] = 3
    validate_distribution_results(dpd, weights)
    print("\nTest passed")

if __name__ == '__main__':
    test_DynamicProbDistribution()
I've implemented a version related to Ken's code, but it is balanced with a red/black tree for worst-case O(log n) operations. It is available as weightedDict.py at: https://github.com/google/weighted-dict
(I would have added this as a comment to Ken's answer, but don't have the reputation to do that!)
