Find all possible paths between N sets of nodes - algorithm

I have N nodes at the start and N nodes at the end, which can be paired up as N pairs (1 to 1, .., n to n, .., N to N). Between the start and the end I have M stages (which can be treated as parallel stages), and each stage has R nodes, where R >= N.
If I consider the path from start node n to end node n (i.e., the n to n pair), I have to traverse (M+1) hops to reach the end node, and there are R^M possible paths. Thus, there are N*R^M possible paths over all pairs.
I weight each link: the link between node i at stage m and node j at stage m+1 has weight w_{i,j}^{m,m+1}.
I want to write MATLAB code that generates all possible paths for each pair, i.e., for all N pairs. Can someone please help me?
I tried an exhaustive search, but only for 2 start/end nodes with 2 stages of 3 nodes each. I don't know how to write this efficiently for a general network. Please help me!
Added: For example, with N = M = 2 and R = 3, I have R^M = 3^2 = 9 possible paths for each pair. For both pairs, I have 18. For the 1 to 1 pair, the set of possible paths is:
{1-1-1-1, 1-1-2-1,1-1-3-1
1-2-1-1, 1-2-2-1,1-2-3-1
1-3-1-1, 1-3-2-1,1-3-3-1}
and the corresponding set of weights is (stage 0 represents the start):
{w_{1,1}^{0,1}-w_{1,1}^{1,2}-w_{1,1}^{2,3}; w_{1,1}^{0,1}-w_{1,2}^{1,2}-w_{2,1}^{2,3}; w_{1,1}^{0,1}-w_{1,3}^{1,2}-w_{3,1}^{2,3}, ........., w_{1,3}^{0,1}-w_{3,3}^{1,2}-w_{3,1}^{2,3}}
The same follows for the 2 to 2 pair.
Actually, in my exhaustive search I generate each hop as a matrix of randomly generated weights: from the start to the 1st stage, A = rand(2,3); from the 1st stage to the 2nd stage, B = rand(3,3); and from the 2nd stage to the end, C = rand(3,2).

Okay, so per your update you just want to generate the Cartesian product
{i} X {1, 2, ..., R}^M X {j}
A quick-and-dirty approach would be to do something like this:
paths = zeros(R ^ M, M + 2);   % initialize array
paths(:, 1) = i;               % start all paths at node i
paths(:, M+2) = j;             % end all paths at node j
for c = 1:M
    for r = 1:(R ^ M)
        paths(r, c+1) = mod(floor((r-1) / R^(c-1)), R) + 1;
    end
end
You could then loop through paths and calculate the weights.
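For illustration only (Python rather than MATLAB, since most of the rest of this page leans on Python), here is a minimal sketch of the same idea using itertools.product; the weight arrays A, B and C are stand-ins for the rand(2,3), rand(3,3) and rand(3,2) matrices mentioned in the question, and the sketch is hard-coded to M = 2 stages:

import itertools
import random

N, M, R = 2, 2, 3
A = [[random.random() for _ in range(R)] for _ in range(N)]  # start   -> stage 1
B = [[random.random() for _ in range(R)] for _ in range(R)]  # stage 1 -> stage 2
C = [[random.random() for _ in range(N)] for _ in range(R)]  # stage 2 -> end

def all_paths_with_weights(i, j):
    # Enumerate {i} x {0..R-1}^M x {j} (0-indexed) and collect the per-hop weights.
    for mid in itertools.product(range(R), repeat=M):
        path = (i,) + mid + (j,)
        link_weights = (A[i][mid[0]], B[mid[0]][mid[1]], C[mid[1]][j])
        yield path, link_weights

for path, w in all_paths_with_weights(0, 0):  # the "1 to 1" pair, 0-indexed
    print(path, w)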


Number of ways to form a string from a matrix of characters with the optimal approach in terms of time complexity?

(UPDATED)
We need to find the number of ways a given string can be formed from a matrix of characters.
We can start forming the word from any position (i, j) in the matrix and can move in any unvisited direction among the 8 directions available from every cell (i, j) of the matrix, i.e.
(i + 1, j)
(i + 1, j + 1)
(i + 1, j - 1)
(i - 1, j)
(i - 1, j + 1)
(i - 1, j - 1)
(i, j + 1)
(i, j - 1)
Sample test cases:
(1) input:
N = 3 (length of string)
string = "fit"
matrix: fitptoke
orliguek
ifefunef
tforitis
output: 7
(2) input:
N = 5 (length of string)
string = "pifit"
matrix: qiq
tpf
pip
rpr
output: 5
Explanation:
the ways to make 'fit' are given below:
(0,0)(0,1)(0,2)
(2,1)(2,0)(3,0)
(2,3)(1,3)(0,4)
(3,1)(2,0)(3,0)
(2,3)(3,4)(3,5)
(2,7)(3,6)(3,5)
(2,3)(1,3)(0,2)
My naive approach: go to every possible position (i, j) in the matrix, start forming the string from that cell (i, j) by performing a DFS on the matrix, and add the number of ways to form the given string from that position (i, j) to a total_num_ways variable.
pseudocode:
W = 0
for i : 0 - n:
    for j : 0 - m:
        visited[n][m] = {false}
        W += DFS(i, j, 0, str, matrix, visited)
But it turns out that this solution is exponential in time complexity, since we go to every possible n * m position and then traverse every possible path of length k (the length of the string) from there.
How can we improve the solution efficiency?
Suggestion - 1: Preprocessing the matrix and the input string
We are only concerned about a cell of the matrix if the character in the cell appears anywhere in the input string. So we aren't concerned about a cell containing the letter 'z' if our input string is 'fit'.
Using that, following is a suggestion.
Taking the input string, first put its characters in a set S. It is an O(k) step, where k is the length of the string;
Next we iterate over the matrix (a O(m*n) step) and:
If the character in the cell does not appear in the S, we continue to the next one;
If the character in the cell appears in S, we add the cell's position to a map from character to list of cell positions, called M.
Now, iterating over the input string (not the matrix), for each position where the current character c appears, get the unvisited positions to the right, left, above and below of the current cell;
If any of these positions are present in the list of cells in M where the next character is present in the matrix, then:
Recursively go to the next character of the input string, until you have exhausted all the characters.
What is better in this solution? We are getting the next cell we need to explore in O(1) because it is already present in the map. As a result, the complexity is not exponential anymore, but it is actually O(c) where c is the total occurrences of the input string in the matrix.
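To make the preprocessing concrete, here is a minimal Python sketch of the first two steps (the names build_char_map, wanted and char_map are illustrative, not from the original post):

from collections import defaultdict

def build_char_map(matrix, word):
    # Map each character of `word` to the list of cells where it occurs.
    wanted = set(word)            # the set S from the first step
    char_map = defaultdict(list)  # the map M from the second step
    for i, row in enumerate(matrix):
        for j, ch in enumerate(row):
            if ch in wanted:
                char_map[ch].append((i, j))
    return char_map

matrix = ["fitptoke", "orliguek", "ifefunef", "tforitis"]
print(build_char_map(matrix, "fit")["f"])  # all cells a search for "fit" can start from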
Suggestion - 2: Dynamic Programming
DP helps in case where there is Optimal Substructure and Overlapping Subproblems. So, in situations where the same substring is a part of multiple solutions, using DP could help.
Ex: If we found 'fit' somewhere then if there is an 'f' in an adjacent cell, it could use the substring 'it' from the first 'fit' we found. This way we would prevent recursing down the rest of the string the moment we encounter a substring that was previously explored.
# Checking if the given (x,y) coordinates are within the boundaries
# of the matrix
def in_bounds(x, y, rows, cols):
    return x >= 0 and x < rows and y >= 0 and y < cols

# Finding all possible moves from the current (x,y) position
def possible_moves(position, path_set, rows, cols):
    moves = []
    move_range = [-1, 0, 1]
    for i in range(len(move_range)):
        for j in range(len(move_range)):
            x = position[0] + move_range[i]
            y = position[1] + move_range[j]
            if in_bounds(x, y, rows, cols):
                if x in path_set:
                    if y in path_set[x]:
                        continue
                moves.append((x, y))
    return moves

# Determine which of the possible moves lead to the next letter
# of the goal string
def check_moves(goal_letter, candidates, search_space):
    moves = []
    for x, y in candidates:
        if search_space[x][y] == goal_letter:
            moves.append((x, y))
    return moves

# Recursively expanding the paths of each starting coordinate
def search(goal, path, search_space, path_set, rows, cols):
    # Base Case
    if goal == '':
        return [path]
    x = path[-1][0]
    y = path[-1][1]
    if x in path_set:
        path_set[x].add(y)
    else:
        path_set.update([(x, set([y]))])
    results = []
    moves = possible_moves(path[-1], path_set, rows, cols)
    moves = check_moves(goal[0], moves, search_space)
    for move in moves:
        result = search(goal[1:], path + [move], search_space, path_set, rows, cols)
        if result is not None:
            results += result
    return results

# Finding the coordinates in the matrix where the first letter from the goal
# string appears which is where all potential paths will begin from.
def find_paths(goal, search_space):
    results = []
    rows, cols = len(search_space), len(search_space[0])
    # Finding starting coordinates for candidate paths
    for i in range(len(search_space)):
        for j in range(len(search_space[i])):
            if search_space[i][j] == goal[0]:
                # Expanding path from root letter
                results += search(goal[1:], [(i, j)], search_space, dict(), rows, cols)
    return results

goal = "fit"
matrix = [
    'fitptoke',
    'orliguek',
    'ifefunef',
    'tforitis'
]
paths = find_paths(goal, matrix)
for path in paths:
    print(path)
print('# of paths:', len(paths))
Instead of expanding the paths from every coordinate of the matrix, the matrix can first be iterated over to find all the (i,j) coordinates that have the same letter as the first letter from the goal string. This takes O(n^2) time.
Then, for each (i,j) coordinate found which contained the first letter from the goal string, expand the paths from there by searching for the second letter from the goal string and expand only the paths that match the second letter. This action is repeated for each letter in the goal string to recursively find all valid paths from the starting coordinates.

Find algorithm: Reconstruct a sequence with the minimum-length combination of disjoint subsequences chosen from a list of subsequences

I do not know if it’s appropriate to ask this question here so sorry if it is not.
I got a sequence ALPHA, for example :
A B D Z A B X
I got a list of subsequences of ALPHA, for example :
A B D
B D
A B
D Z
A
B
D
Z
X
I am looking for an algorithm that finds the minimum number of disjoint subsequences that reconstruct ALPHA; for example, in our case:
{A B D} {Z} {A B} {X}
Any ideas? My guess is something already exists.
You can transform this problem into finding a minimum path in a graph.
The nodes will correspond to prefixes of the string, including one for the empty string. There will be an edge from node A to node B if appending one of the allowed subsequences to the prefix A yields the prefix B.
The question is now transformed into finding the minimum path in the graph starting from the node corresponding to the empty string, and ending in the node corresponding to the entire input string.
You can now apply e.g. BFS (since the edges have uniform costs), or Dijkstra's algorithm to find this path.
The following python code is an implementation based on the principles above:
def reconstruct(seq, subseqs):
    n = len(seq)
    d = dict()
    for subseq in subseqs:
        d[subseq] = True
    # in this solution, the node with value v will correspond
    # to the substring seq[0: v]. Thus node 0 corresponds to the empty string
    # and node n corresponds to the entire string
    # this will keep track of the predecessor for each node
    predecessors = [-1] * (n + 1)
    reached = [False] * (n + 1)
    reached[0] = True
    # initialize the queue and add the first node
    # (the node corresponding to the empty string)
    q = []
    qstart = 0
    q.append(0)
    while True:
        # test if we already found a solution
        if reached[n]:
            break
        # test if the queue is empty
        if qstart >= len(q):
            break
        # poll the first value from the queue
        v = q[qstart]
        qstart += 1
        # try appending a subsequence to the current node
        for n2 in range(1, n - v + 1):
            # the destination node was already added into the queue
            if reached[v + n2]:
                continue
            if seq[v: (v + n2)] in d:
                q.append(v + n2)
                predecessors[v + n2] = v
                reached[v + n2] = True
    if not reached[n]:
        return []
    # reconstruct the path, starting from the last node
    pos = n
    solution = []
    while pos > 0:
        solution.append(seq[predecessors[pos]: pos])
        pos = predecessors[pos]
    solution.reverse()
    return solution

print(reconstruct("ABDZABX", ["ABD", "BD", "AB", "DZ", "A", "B", "D", "Z", "X"]))
I don't have much experience with python, that's the main reason why I preferred to stick to the basics (e.g. implementing a queue with a list + an index to the start).

Algorithm to group items in groups of 3

I am trying to solve a problem where I have pairs like:
A C
B F
A D
D C
F E
E B
A B
B C
E D
F D
and I need to group them in groups of 3, where each group must form a triangle of matches from that list. Basically I need a result saying whether or not it is possible to group the collection.
So the possible groups are (ACD and BFE), or (ABC and DEF) and this collection is groupable since all letters can be grouped in groups of 3 and no one is left out.
I made a script where I can achieve this for small amounts of input, but for big amounts it gets too slow.
My logic is:
make a nested loop to find the first match (looping until I find a match)
> remove 3 elements from the collection
> run again
and I do this until I am out of letters. Since there can be different combinations, I run this multiple times starting on different letters until I find a match.
I can understand that this gives me loops of order at least N^N and can get too slow. Is there a better approach for such problems? Can a binary tree be used here?
This problem can be modeled as a graph Clique cover problem. Every letter is a node and every pair is an edge and you want to partition the graph into vertex-disjoint cliques of size 3 (triangles). If you want the partitioning to be of minimum cardinality then you want a minimum clique cover.
Actually this would be a k-clique cover problem, because in the clique cover problem you can have cliques of arbitrary/different sizes.
As Alberto Rivelli already stated, this problem is reducible to the Clique Cover problem, which is NP-hard.
It is also reducible to the problem of finding a clique of a particular/maximum size. Maybe there are other, non-NP-hard problems to which your particular case could be reduced, but I didn't think of any.
However, there do exist algorithms which can find the solution in polynomial time, although not always for worst cases. One of them is the Bron–Kerbosch algorithm, which is by far the best-known algorithm for finding a maximum clique and runs in O(3^(n/3)) in the worst case. I don't know the size of your inputs, but I hope it will be sufficient for your problem.
Here is the code in Python, ready to go:
#!/usr/bin/python3
# #by DeFazer
# Solution to:
# stackoverflow.com/questions/40193648/algorithm-to-group-items-in-groups-of-3
# Input:
#   N P - number of vertices and number of pairs
#   P pairs, 1 pair per line
# Output:
#   "YES" and groups themselves if grouping is possible, and "NO" otherwise
# Input example:
# 6 10
# 1 3
# 2 6
# 1 4
# 4 3
# 6 5
# 5 2
# 1 2
# 2 3
# 5 4
# 6 4
# Output example:
# YES
# 1-2-3
# 4-5-6
# Output commentary:
# There are 2 possible coverages: 1-2-3*4-5-6 and 2-5-6*1-3-4.
# If required, it can be easily modified to return all possible groupings rather than just one.
# Algorithm:
# 1) List *all* existing triangles (1-2-3, 1-3-4, 2-5-6...)
# 2) Build a graph where vertices represent triangles and edges connect triangles that share no vertices.
# 3) Use [this](en.wikipedia.org/wiki/Bron–Kerbosch_algorithm) algorithm (slightly modified) to find a clique of size N/3.
#    The grouping is possible if such a clique exists.

N, P = map(int, input().split())
assert (N % 3 == 0) and (N > 0)
cliquelength = N // 3
pairs = {}  # {a:{b, d, c}, b:{a, c, f}, c:{a, b}...}

# Get input
# [(0, 1), (1, 3), (3, 2)...]
##pairlist = list(map(lambda ab: tuple(map(lambda a: int(a)-1, ab)), (input().split() for pair in range(P))))
pairlist = []
for pair in range(P):
    a, b = map(int, input().split())
    if a > b:
        b, a = a, b
    a, b = a-1, b-1
    pairlist.append((a, b))
pairlist.sort()

for pair in pairlist:
    a, b = pair
    if a not in pairs:
        pairs[a] = set()
    pairs[a].add(b)

# Make list of triangles
triangles = []
for a in range(N-2):
    for b in pairs.get(a, []):
        for c in pairs.get(b, []):
            if c in pairs[a]:
                triangles.append((a, b, c))
                break

def no_mutual_elements(sortedtupleA, sortedtupleB):
    # Utility function
    # TODO: if too slow, can be improved to O(n) since tuples are sorted. However, there are only 9 comparisons in case of triangles.
    return all((a not in sortedtupleB) for a in sortedtupleA)

# Make a graph out of that list
tgraph = []  # if a<b and (b in tgraph[a]), then triangles[a] has no common elements with triangles[b]
T = len(triangles)
for t1 in range(T):
    s = set()
    for t2 in range(t1+1, T):
        if no_mutual_elements(triangles[t1], triangles[t2]):
            s.add(t2)
    tgraph.append(s)

def connected(a, b):
    if a > b:
        b, a = a, b
    return (b in tgraph[a])

# Finally, the magic algorithm!
CSUB = set()
def extend(CAND: set, NOT: set) -> bool:
    # while CAND is not empty and there is no vertex in NOT connected to *all* vertexes in CAND
    while CAND and all((any(not connected(n, c) for c in CAND)) for n in NOT):
        v = CAND.pop()
        CSUB.add(v)
        newCAND = {c for c in CAND if connected(c, v)}
        newNOT = {n for n in NOT if connected(n, v)}
        if (not newCAND) and (not newNOT) and (len(CSUB) == cliquelength):  # the last condition is the algorithm modification
            return True
        elif extend(newCAND, newNOT):
            return True
        else:
            CSUB.remove(v)
            NOT.add(v)

if extend(set(range(T)), set()):
    print("YES")
    # If the clique itself is not needed, it's enough to remove the following 2 lines
    for a, b, c in [triangles[c] for c in CSUB]:
        print("{}-{}-{}".format(a+1, b+1, c+1))
else:
    print("NO")
If this solution is still too slow, perhaps it may be more efficient to solve the Clique Cover problem instead. If that's the case, I can try to find a proper algorithm for it.
Hope that helps!
Well, I have implemented the job in JS, where I feel most confident. I also tried it with 100000 edges randomly selected from 26 letters. Provided that they are all unique and there is no degenerate pair such as ["A","A"], it resolves in roughly 90~500 msecs. The most convoluted part was obtaining the non-identical groups, i.e. those that differ by more than just the order of the triangles. For the given edges data it resolves within 1 msec.
In summary, the first reduce stage finds the triangles and the second reduce stage groups the disconnected ones.
function getDisconnectedTriangles(edges){
  return edges.reduce(function(p,e,i,a){
    var ce = a.slice(i+1)
              .filter(f => f.some(n => e.includes(n))), // connected edges
        re = []; // resulting edges
    if (ce.length > 1){
      re = ce.reduce(function(r,v,j,b){
        var xv = v.find(n => e.indexOf(n) === -1), // find the external vertex
            xe = b.slice(j+1) // find the external edges
                  .filter(f => f.indexOf(xv) !== -1 );
        return xe.length ? (r.push([...new Set(e.concat(v,xe[0]))]),r) : r;
      },[]);
    }
    return re.length ? p.concat(re) : p;
  },[])
  .reduce((s,t,i,a) => t.used ? s
                              : (s.push(a.map((_,j) => a[(i+j)%a.length])
                                         .reduce((p,c,k) => k-1 ? p.every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, p.push(c),p) : p
                                                                : [p].every(t => t.every(n => c.every(v => n !== v))) ? (c.used = true, [p,c]) : [p])),s)
         ,[]);
}

var edges = [["A","C"],["B","F"],["A","D"],["D","C"],["F","E"],["E","B"],["A","B"],["B","C"],["E","D"],["F","D"]],
    ps = 0,
    pe = 0,
    result = [];
ps = performance.now();
result = getDisconnectedTriangles(edges);
pe = performance.now();
console.log("Disconnected triangles are calculated in", pe-ps, "msecs and the result is:");
console.log(result);
You may generate random edges in different lengths and play with the code here

How to get the target number with +3 or *5 operations without recursion?

This is an interview problem I came across yesterday. I can think of a recursive solution, but I want to know if there's a non-recursive solution.
Given a number N, starting with number 1, you can only multiply the result by 5 or add 3 to the result. If there's no way to get N through this method, return "Can't generate it".
Ex:
Input: 23
Output: (1+3)*5+3
Input: 215
Output: ((1*5+3)*5+3)*5
Input: 12
Output: Can't generate it.
The recursive method can be obvious and intuitive, but are there any non-recursive methods?
I think the quickest, non recursive solution is (for N > 2):
if N mod 3 == 1, it can be generated as 1 + 3*k.
if N mod 3 == 2, it can be generated as 1*5 + 3*k
if N mod 3 == 0, it cannot be generated
The last statement comes from the fact that, starting with 1 (= 1 mod 3), you can only reach numbers that are equal to 1 or 2 mod 3:
when you add 3, you don't change the value mod 3
a number equal to 1 mod 3 multiplied by 5 gives a number equal to 2 mod 3
a number equal to 2 mod 3 multiplied by 5 gives a number equal to 1 mod 3
The key here is to work backwards. Start with the number you want to reach and, if it's divisible by 5, divide by 5, because multiplication by 5 results in a shorter solution than addition by 3. The only exception is when the value equals 10, because dividing by 5 would yield 2, which is unsolvable. If the number is not divisible by 5 or is equal to 10, subtract 3. This produces the shortest string.
Repeat until you reach 1
Here is python code:
def f(x):
    if x % 3 == 0 or x == 2:
        return "Can't generate it"
    l = []
    while x != 1:
        if x % 5 != 0 or x == 10:
            l.append(3)
            x -= 3
        else:
            l.append(5)
            x //= 5
    l.reverse()
    s = '1'
    for v in l:
        if v == 3:
            s += ' + 3'
        else:
            s = '(' + s + ')*5'
    return s
Credit to the previous solutions for determining whether a given number is possible
Model the problem as a graph:
Nodes are numbers
Your root node is 1
Links between nodes are *5 or +3.
Then run Dijkstra's algorithm to get the shortest path. If you exhaust all links from nodes <N without getting to N then you can't generate N. (Alternatively, use #obourgain's answer to decide in advance whether the problem can be solved, and only attempt to work out how to solve the problem if it can be solved.)
So essentially, you enqueue the node (1, null path). You need a dictionary storing {node(i.e. number) => best path found so far for that node}. Then, so long as the queue isn't empty, in each pass of the loop you
Dequeue the head (node,path) from the queue.
If the number of this node is >N, or you've already seen this node before with fewer steps in the path, then don't do any more on this pass.
Add (node => path) to the dictionary.
Enqueue nodes reachable from this node with *5 and +3 (together with the paths that get you to those nodes)
When the loop terminates, look up N in the dictionary to get the path, or output "Can't generate it".
Edit: note, this is really Breadth-first search rather than Dijkstra's algorithm, as the cost of traversing a link is fixed at 1.
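For completeness, here is a minimal BFS sketch of this answer in Python (the helper name generate and the way expressions are built up are illustrative, not part of the original answer):

from collections import deque

def generate(n):
    # Breadth-first search over the graph rooted at 1 with links *5 and +3.
    if n < 1:
        return None
    paths = {1: "1"}                    # number -> expression found so far
    queue = deque([1])
    while queue:
        value = queue.popleft()
        if value == n:
            return paths[value]
        for nxt, expr in ((value * 5, "(" + paths[value] + ")*5"),
                          (value + 3, paths[value] + "+3")):
            if nxt <= n and nxt not in paths:
                paths[nxt] = expr
                queue.append(nxt)
    return None

print(generate(23))    # (1+3)*5+3, the shortest expression
print(generate(12))    # None, i.e. "Can't generate it"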
You can use the following recursion (which is indeed intuitive):
f(input) = f(input/5) OR f(input -3)
base:
f(1) = true
f(x) = false if x is not a positive natural number
Note that it can be done using Dynamic Programming as well:
f[-2] = f[-1] = f[0] = false
f[1] = true
for i from 2 to n:
    f[i] = f[i-3] or (i%5 == 0 ? f[i/5] : false)
To recover the actual expression, walk the table back from f[n] and follow the moves that are true.
Time and space complexity of the DP solution is O(n) [pseudo-polynomial]
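A small bottom-up sketch of that DP in Python, storing one expression per reachable value instead of a separate boolean table (names are illustrative):

def can_generate(n):
    # Bottom-up DP over 1..n; returns one expression for n, or None.
    if n < 1:
        return None
    expr = [None] * (n + 1)      # expr[i] holds one way to write i, if any
    expr[1] = "1"
    for i in range(2, n + 1):
        if i - 3 >= 1 and expr[i - 3] is not None:
            expr[i] = expr[i - 3] + "+3"
        elif i % 5 == 0 and expr[i // 5] is not None:
            expr[i] = "(" + expr[i // 5] + ")*5"
    return expr[n]

print(can_generate(215))   # prints one valid expression (here a long chain of +3's, not the shortest)
print(can_generate(12))    # None, i.e. "Can't generate it"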
All recursive algorithms can also be implemented using a stack. So, something like this:
bool canProduce(int target){
    std::stack<int> numStack;
    int current;
    numStack.push(1);
    while(!numStack.empty()){
        current = numStack.top();
        numStack.pop();
        if(current == target)
            return true;
        if(current + 3 <= target)
            numStack.push(current + 3);
        if(current * 5 <= target)
            numStack.push(current * 5);
    }
    return false;
}
In Python:
The smart solution:
def f(n):
    if n % 3 == 1:
        print('1' + '+3' * (n // 3))
    elif n % 3 == 2:
        print('1*5' + '+3' * ((n - 5) // 3))
    else:
        print("Can't generate it.")
A naive but still O(n) version:
def f(n):
    d = {1: '1'}
    for i in range(n):
        if i in d:
            d[i*5] = '(' + d[i] + ')*5'
            d[i+3] = d[i] + '+3'
    if n in d:
        print(d[n])
    else:
        print("Can't generate it.")
And of course, you could also use a stack to reproduce the behavior of the recursive calls.
Which gives:
>>> f(23)
(1)*5+3+3+3+3+3+3
>>> f(215)
(1)*5+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3+3
>>> f(12)
Can't generate it.

Select k random elements from a list whose elements have weights

Selecting without any weights (equal probabilities) is beautifully described here.
I was wondering if there is a way to convert this approach to a weighted one.
I am also interested in other approaches.
Update: Sampling without replacement
If the sampling is with replacement, you can use this algorithm (implemented here in Python):
import random

items = [(10, "low"),
         (100, "mid"),
         (890, "large")]

def weighted_sample(items, n):
    total = float(sum(w for w, v in items))
    i = 0
    w, v = items[0]
    while n:
        x = total * (1 - random.random() ** (1.0 / n))
        total -= x
        while x > w:
            x -= w
            i += 1
            w, v = items[i]
        w -= x
        yield v
        n -= 1
This is O(n + m) where m is the number of items.
Why does this work? It is based on the following algorithm:
def n_random_numbers_decreasing(v, n):
    """Like reversed(sorted(v * random() for i in range(n))),
    but faster because we avoid sorting."""
    while n:
        v *= random.random() ** (1.0 / n)
        yield v
        n -= 1
The function weighted_sample is just this algorithm fused with a walk of the items list to pick out the items selected by those random numbers.
This in turn works because the probability that n random numbers 0..v will all happen to be less than z is P = (z/v)^n. Solve for z, and you get z = v * P^(1/n). Substituting a random number for P picks the largest number with the correct distribution; and we can just repeat the process to select all the other numbers.
If the sampling is without replacement, you can put all the items into a binary heap, where each node caches the total of the weights of all items in that subheap. Building the heap is O(m). Selecting a random item from the heap, respecting the weights, is O(log m). Removing that item and updating the cached totals is also O(log m). So you can pick n items in O(m + n log m) time.
(Note: "weight" here means that every time an element is selected, the remaining possibilities are chosen with probability proportional to their weights. It does not mean that elements appear in the output with a likelihood proportional to their weights.)
Here's an implementation of that, plentifully commented:
import random

class Node:
    # Each node in the heap has a weight, value, and total weight.
    # The total weight, self.tw, is self.w plus the weight of any children.
    __slots__ = ['w', 'v', 'tw']
    def __init__(self, w, v, tw):
        self.w, self.v, self.tw = w, v, tw

def rws_heap(items):
    # h is the heap. It's like a binary tree that lives in an array.
    # It has a Node for each pair in `items`. h[1] is the root. Each
    # other Node h[i] has a parent at h[i>>1]. Each node has up to 2
    # children, h[i<<1] and h[(i<<1)+1]. To get this nice simple
    # arithmetic, we have to leave h[0] vacant.
    h = [None]                          # leave h[0] vacant
    for w, v in items:
        h.append(Node(w, v, w))
    for i in range(len(h) - 1, 1, -1):  # total up the tws
        h[i>>1].tw += h[i].tw           # add h[i]'s total to its parent
    return h

def rws_heap_pop(h):
    gas = h[1].tw * random.random()     # start with a random amount of gas
    i = 1                               # start driving at the root
    while gas >= h[i].w:                # while we have enough gas to get past node i:
        gas -= h[i].w                   #   drive past node i
        i <<= 1                         #   move to first child
        if gas >= h[i].tw:              #   if we have enough gas:
            gas -= h[i].tw              #     drive past first child and descendants
            i += 1                      #     move to second child
    w = h[i].w                          # out of gas! h[i] is the selected node.
    v = h[i].v
    h[i].w = 0                          # make sure this node isn't chosen again
    while i:                            # fix up total weights
        h[i].tw -= w
        i >>= 1
    return v

def random_weighted_sample_no_replacement(items, n):
    heap = rws_heap(items)              # just make a heap...
    for i in range(n):
        yield rws_heap_pop(heap)        # and pop n items off it.
If the sampling is with replacement, use the roulette-wheel selection technique (often used in genetic algorithms):
sort the weights
compute the cumulative weights
pick a random number in [0,1]*totalWeight
find the interval in which this number falls into
select the elements with the corresponding interval
repeat k times
If the sampling is without replacement, you can adapt the above technique by removing the selected element from the list after each iteration, then re-normalizing the weights so that their sum is 1 (valid probability distribution function)
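A minimal Python sketch of the roulette-wheel steps above (with replacement; the sorting step is omitted since it is not required for correctness, and the item list is purely illustrative):

import bisect
import random

def roulette_pick(items, k):
    # items is a list of (weight, value) pairs.
    weights = [w for w, _ in items]
    cumulative = []                       # cumulative weights
    total = 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    picks = []
    for _ in range(k):                    # repeat k times
        r = random.random() * total       # random number in [0, totalWeight)
        i = bisect.bisect_right(cumulative, r)   # find the interval it falls into
        picks.append(items[i][1])         # select the corresponding element
    return picks

print(roulette_pick([(10, "low"), (100, "mid"), (890, "large")], 3))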
I know this is a very old question, but I think there's a neat trick to do this in O(n) time if you apply a little math!
The exponential distribution has two very useful properties.
Given n samples from different exponential distributions with different rate parameters, the probability that a given sample is the minimum is equal to its rate parameter divided by the sum of all rate parameters.
It is "memoryless". So if you already know the minimum, then the probability that any of the remaining elements is the 2nd-to-min is the same as the probability that if the true min were removed (and never generated), that element would have been the new min. This seems obvious, but I think because of some conditional probability issues, it might not be true of other distributions.
Using fact 1, we know that choosing a single element can be done by generating these exponential distribution samples with rate parameter equal to the weight, and then choosing the one with minimum value.
Using fact 2, we know that we don't have to re-generate the exponential samples. Instead, just generate one for each element, and take the k elements with lowest samples.
Finding the lowest k can be done in O(n). Use the Quickselect algorithm to find the k-th element, then simply take another pass through all elements and output all lower than the k-th.
A useful note: if you don't have immediate access to a library to generate exponential distribution samples, it can be easily done by: -ln(rand())/weight
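A minimal Python sketch of this trick (using heapq.nsmallest instead of Quickselect for brevity, so it is O(n log k) rather than O(n); the function name is illustrative):

import heapq
import math
import random

def weighted_sample_without_replacement(items, k):
    # items is a list of (weight, value); each element gets an exponential
    # key with rate equal to its weight, and the k smallest keys win.
    keyed = [(-math.log(1.0 - random.random()) / w, v) for w, v in items]
    return [v for _, v in heapq.nsmallest(k, keyed)]

print(weighted_sample_without_replacement(
    [(10, "low"), (100, "mid"), (890, "large")], 2))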
I've done this in Ruby
https://github.com/fl00r/pickup
require 'pickup'
pond = {
"selmon" => 1,
"carp" => 4,
"crucian" => 3,
"herring" => 6,
"sturgeon" => 8,
"gudgeon" => 10,
"minnow" => 20
}
pickup = Pickup.new(pond, uniq: true)
pickup.pick(3)
#=> [ "gudgeon", "herring", "minnow" ]
pickup.pick
#=> "herring"
pickup.pick
#=> "gudgeon"
pickup.pick
#=> "sturgeon"
If you want to generate large arrays of random integers with replacement, you can use piecewise linear interpolation. For example, using NumPy/SciPy:
import numpy
import scipy.interpolate

def weighted_randint(weights, size=None):
    """Given an n-element vector of weights, randomly sample
    integers up to n with probabilities proportional to weights"""
    n = weights.size
    # normalize so that the weights sum to unity
    weights = weights / numpy.linalg.norm(weights, 1)
    # cumulative sum of weights
    cumulative_weights = weights.cumsum()
    # piecewise-linear interpolating function whose domain is
    # the unit interval and whose range is the integers up to n
    f = scipy.interpolate.interp1d(
            numpy.hstack((0.0, cumulative_weights)),
            numpy.arange(n + 1), kind='linear')
    return f(numpy.random.random(size=size)).astype(int)
This is not effective if you want to sample without replacement.
Here's a Go implementation from geodns:
package foo

import (
    "log"
    "math/rand"
)

type server struct {
    Weight int
    data   interface{}
}

func foo(servers []server) []server {
    // servers list is already sorted by the Weight attribute
    // number of items to pick
    max := 4
    result := make([]server, max)
    sum := 0
    for _, r := range servers {
        sum += r.Weight
    }
    for si := 0; si < max; si++ {
        n := rand.Intn(sum + 1)
        s := 0
        for i := range servers {
            s += int(servers[i].Weight)
            if s >= n {
                log.Println("Picked record", i, servers[i])
                sum -= servers[i].Weight
                result[si] = servers[i]
                // remove the server from the list
                servers = append(servers[:i], servers[i+1:]...)
                break
            }
        }
    }
    return result
}
If you want to pick x elements from a weighted set without replacement such that elements are chosen with a probability proportional to their weights:
import random

def weighted_choose_subset(weighted_set, count):
    """Return a random sample of count elements from a weighted set.
    weighted_set should be a sequence of tuples of the form
    (item, weight), for example: [('a', 1), ('b', 2), ('c', 3)]
    Each element from weighted_set shows up at most once in the
    result, and the relative likelihood of two particular elements
    showing up is equal to the ratio of their weights.
    This works as follows:
    1.) Line up the items along the number line from [0, the sum
    of all weights) such that each item occupies a segment of
    length equal to its weight.
    2.) Randomly pick a number "start" in the range [0, total
    weight / count).
    3.) Find all the points "start + n/count" (for all integers n
    such that the point is within our segments) and yield the set
    containing the items marked by those points.
    Note that this implementation may not return each possible
    subset. For example, with the input ([('a': 1), ('b': 1),
    ('c': 1), ('d': 1)], 2), it may only produce the sets ['a',
    'c'] and ['b', 'd'], but it will do so such that the weights
    are respected.
    This implementation only works for nonnegative integral
    weights. The highest weight in the input set must be less
    than the total weight divided by the count; otherwise it would
    be impossible to respect the weights while never returning
    that element more than once per invocation.
    """
    if count == 0:
        return []
    total_weight = 0
    max_weight = 0
    borders = []
    for item, weight in weighted_set:
        if weight < 0:
            raise RuntimeError("All weights must be positive integers")
        # Scale up weights so dividing total_weight / count doesn't truncate:
        weight *= count
        total_weight += weight
        borders.append(total_weight)
        max_weight = max(max_weight, weight)
    step = int(total_weight / count)
    if max_weight > step:
        raise RuntimeError(
            "Each weight must be less than total weight / count")
    next_stop = random.randint(0, step - 1)
    results = []
    current = 0
    for i in range(count):
        while borders[current] <= next_stop:
            current += 1
        results.append(weighted_set[current][0])
        next_stop += step
    return results
In the question you linked to, Kyle's solution would work with a trivial generalization.
Scan the list and sum the total weights. Then the probability to choose an element should be:
1 - (1 - (#needed/(weight left)))/(weight at n). After visiting a node, subtract its weight from the total. Also, if you need n and have n left, you have to stop explicitly.
You can check that with everything having weight 1, this simplifies to Kyle's solution.
Edited: (had to rethink what twice as likely meant)
This one does exactly that with O(n) and no excess memory usage. I believe this is a clever and efficient solution easy to port to any language. The first two lines are just to populate sample data in Drupal.
function getNrandomGuysWithWeight($numitems){
    $q = db_query('SELECT id, weight FROM theTableWithTheData');
    $q = $q->fetchAll();
    $accum = 0;
    foreach($q as $r){
        $accum += $r->weight;
        $r->weight = $accum;
    }
    $out = array();
    while(count($out) < $numitems && count($q)){
        $n = rand(0,$accum);
        $lessaccum = NULL;
        $prevaccum = 0;
        $idxrm = 0;
        foreach($q as $i=>$r){
            if(($lessaccum == NULL) && ($n <= $r->weight)){
                $out[] = $r->id;
                $lessaccum = $r->weight - $prevaccum;
                $accum -= $lessaccum;
                $idxrm = $i;
            }else if($lessaccum){
                $r->weight -= $lessaccum;
            }
            $prevaccum = $r->weight;
        }
        unset($q[$idxrm]);
    }
    return $out;
}
I am putting here a simple solution for picking 1 item; you can easily expand it for k items (Java style):
double random = Math.random();
double sum = 0;
for (int i = 0; i < items.length; i++) {
    val = items[i];
    sum += val.getValue();
    if (sum > random) {
        selected = val;
        break;
    }
}
I have implemented an algorithm similar to Jason Orendorff's idea in Rust here. My version additionally supports bulk operations: insert and remove (when you want to remove a bunch of items given by their ids, not through the weighted selection path) from the data structure in O(m + log n) time where m is the number of items to remove and n the number of items in stored.
Sampling without replacement with recursion - an elegant and very short solution in C#
//how many ways we can choose 4 out of 60 students, so that every time we choose a different 4
class Program
{
    static void Main(string[] args)
    {
        int group = 60;
        int studentsToChoose = 4;
        Console.WriteLine(FindNumberOfStudents(studentsToChoose, group));
    }

    private static int FindNumberOfStudents(int studentsToChoose, int group)
    {
        if (studentsToChoose == group || studentsToChoose == 0)
            return 1;
        return FindNumberOfStudents(studentsToChoose, group - 1) + FindNumberOfStudents(studentsToChoose - 1, group - 1);
    }
}
I just spent a few hours trying to get behind the algorithms underlying sampling without replacement out there, and this topic is more complex than I initially thought. That's exciting! For the benefit of future readers (have a good day!) I document my insights here, including a ready-to-use function further below which respects the given inclusion probabilities. A nice and quick mathematical overview of the various methods can be found here: Tillé: Algorithms of sampling with equal or unequal probabilities. For example, Jason's method can be found on page 46. The caveat with his method is that the weights are not proportional to the inclusion probabilities, as also noted in the document. Actually, the i-th inclusion probabilities can be recursively computed as follows:
def inclusion_probability(i, weights, k):
    """
    Computes the inclusion probability of the i-th element
    in a randomly sampled k-tuple using Jason's algorithm
    (see https://stackoverflow.com/a/2149533/7729124)
    """
    if k <= 0: return 0
    cum_p = 0
    for j, weight in enumerate(weights):
        # compute the probability of j being selected considering the weights
        p = weight / sum(weights)
        if i == j:
            # if this is the target element, we don't have to go deeper,
            # since we know that i is included
            cum_p += p
        else:
            # if this is not the target element, then we compute the conditional
            # inclusion probability of i under the constraint that j is included
            cond_i = i if i < j else i-1
            cond_weights = weights[:j] + weights[j+1:]
            cond_p = inclusion_probability(cond_i, cond_weights, k-1)
            cum_p += p * cond_p
    return cum_p
And we can check the validity of the function above by comparing
In : for i in range(3): print(i, inclusion_probability(i, [1,2,3], 2))
0 0.41666666666666663
1 0.7333333333333333
2 0.85
to
In : import collections, itertools
In : sample_tester = lambda f: collections.Counter(itertools.chain(*(f() for _ in range(10000))))
In : sample_tester(lambda: random_weighted_sample_no_replacement([(1,'a'),(2,'b'),(3,'c')],2))
Out: Counter({'a': 4198, 'b': 7268, 'c': 8534})
One way - also suggested in the document above - to specify the inclusion probabilities is to compute the weights from them. The whole complexity of the question at hand stems from the fact that one cannot do that directly, since one basically has to invert the recursion formula; symbolically I claim this is impossible. Numerically it can be done using all kinds of methods, e.g. Newton's method. However, the complexity of inverting the Jacobian using plain Python becomes unbearable quickly; I really recommend looking into numpy.random.choice in this case.
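For reference, a minimal usage example of numpy.random.choice for weighted sampling without replacement (the weights are purely illustrative; note that its realized inclusion probabilities are not necessarily equal to k * weight / sum(weights) either):

import numpy as np

weights = np.array([1.0, 2.0, 3.0, 4.0])
p = weights / weights.sum()        # probabilities passed to choice must sum to 1
sample = np.random.choice(len(weights), size=2, replace=False, p=p)
print(sample)                      # indices of the two selected elements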
Luckily there is a method using plain Python which might or might not be sufficiently performant for your purposes; it works great if there aren't that many different weights. You can find the algorithm on pages 75-76. It works by splitting up the sampling process into parts with the same inclusion probabilities, i.e. we can use random.sample again! I am not going to explain the principle here since the basics are nicely presented on page 69. Here is the code with hopefully a sufficient amount of comments:
import random

def sample_no_replacement_exact(items, k, best_effort=False, random_=None, ε=1e-9):
    """
    Returns a random sample of k elements from items, where items is a list of
    tuples (weight, element). The inclusion probability of an element in the
    final sample is given by
       k * weight / sum(weights).

    Note that the function raises if an inclusion probability cannot be
    satisfied, e.g. the following call is obviously illegal:
       sample_no_replacement_exact([(1,'a'),(2,'b')],2)

    Since selecting two elements means selecting both all the time,
    'b' cannot be selected twice as often as 'a'. In general it can be hard to
    spot if the weights are illegal and the function does *not* always raise
    an exception in that case. To remedy the situation you can pass
    best_effort=True which redistributes the inclusion probability mass
    if necessary. Note that the inclusion probabilities will change
    if deemed necessary.

    The algorithm is based on the splitting procedure on page 75/76 in:
    http://www.eustat.eus/productosServicios/52.1_Unequal_prob_sampling.pdf
    Additional information can be found here:
    https://stackoverflow.com/questions/2140787/

    :param items: list of tuples of type weight,element
    :param k: length of resulting sample
    :param best_effort: fix inclusion probabilities if necessary,
                        (optional, defaults to False)
    :param random_: random module to use (optional, defaults to the
                    standard random module)
    :param ε: fuzziness parameter when testing for zero in the context
              of floating point arithmetic (optional, defaults to 1e-9)
    :return: random sample set of size k
    :exception: throws ValueError in case of bad parameters,
                throws AssertionError in case of algorithmic impossibilities
    """
    # random_ defaults to the random submodule
    if not random_:
        random_ = random
    # special case empty return set
    if k <= 0:
        return set()
    if k > len(items):
        raise ValueError("resulting tuple length exceeds number of elements (k > n)")
    # sort items by weight
    items = sorted(items, key=lambda item: item[0])
    # extract the weights and elements
    weights, elements = list(zip(*items))
    # compute the inclusion probabilities (short: π) of the elements
    scaling_factor = k / sum(weights)
    π = [scaling_factor * weight for weight in weights]
    # in case of best_effort: if an inclusion probability exceeds 1,
    # try to rebalance the probabilities such that:
    # a) no probability exceeds 1,
    # b) the probabilities still sum to k, and
    # c) the probability masses flow from top to bottom:
    #    [0.2, 0.3, 1.5] -> [0.2, 0.8, 1]
    # (remember that π is sorted)
    if best_effort and π[-1] > 1 + ε:
        # probability mass we still have to distribute
        debt = 0.
        for i in reversed(range(len(π))):
            if π[i] > 1.:
                # an 'offender', take away excess
                debt += π[i] - 1.
                π[i] = 1.
            else:
                # case π[i] < 1, i.e. 'safe' element
                # maximum we can transfer from debt to π[i] and still not
                # exceed 1 is computed by the minimum of:
                # a) 1 - π[i], and
                # b) debt
                max_transfer = min(debt, 1. - π[i])
                debt -= max_transfer
                π[i] += max_transfer
        assert debt < ε, "best effort rebalancing failed (impossible)"
    # make sure we are talking about probabilities
    if any(not (0 - ε <= π_i <= 1 + ε) for π_i in π):
        raise ValueError("inclusion probabilities not satisfiable: {}" \
                         .format(list(zip(π, elements))))
    # special case equal probabilities
    # (up to fuzziness parameter, remember that π is sorted)
    if π[-1] < π[0] + ε:
        return set(random_.sample(elements, k))
    # compute the two possible lambda values, see formula 7 on page 75
    # (remember that π is sorted)
    λ1 = π[0] * len(π) / k
    λ2 = (1 - π[-1]) * len(π) / (len(π) - k)
    λ = min(λ1, λ2)
    # there are two cases now, see also page 69
    # CASE 1
    # with probability λ we are in the equal probability case
    # where all elements have the same inclusion probability
    if random_.random() < λ:
        return set(random_.sample(elements, k))
    # CASE 2:
    # with probability 1-λ we are in the case of a new sample without
    # replacement problem which is strictly simpler,
    # it has the following new probabilities (see page 75, π^{(2)}):
    new_π = [
        (π_i - λ * k / len(π))
        /
        (1 - λ)
        for π_i in π
    ]
    new_items = list(zip(new_π, elements))
    # the first few probabilities might be 0, remove them
    # NOTE: we make sure that floating point issues do not arise
    #       by using the fuzziness parameter
    while new_items and new_items[0][0] < ε:
        new_items = new_items[1:]
    # the last few probabilities might be 1, remove them and mark them as selected
    # NOTE: we make sure that floating point issues do not arise
    #       by using the fuzziness parameter
    selected_elements = set()
    while new_items and new_items[-1][0] > 1 - ε:
        selected_elements.add(new_items[-1][1])
        new_items = new_items[:-1]
    # the algorithm reduces the length of the sample problem,
    # it is guaranteed that:
    # if λ = λ1: the first item has probability 0
    # if λ = λ2: the last item has probability 1
    assert len(new_items) < len(items), "problem was not simplified (impossible)"
    # recursive call with the simpler sample problem
    # NOTE: we have to make sure that the selected elements are included
    return sample_no_replacement_exact(
        new_items,
        k - len(selected_elements),
        best_effort=best_effort,
        random_=random_,
        ε=ε
    ) | selected_elements
Example:
In : sample_no_replacement_exact([(1,'a'),(2,'b'),(3,'c')],2)
Out: {'b', 'c'}
In : import collections, itertools
In : sample_tester = lambda f: collections.Counter(itertools.chain(*(f() for _ in range(10000))))
In : sample_tester(lambda: sample_no_replacement_exact([(1,'a'),(2,'b'),(3,'c'),(4,'d')],2))
Out: Counter({'a': 2048, 'b': 4051, 'c': 5979, 'd': 7922})
The weights sum up to 10, hence the inclusion probabilities compute to: a → 20%, b → 40%, c → 60%, d → 80%. (Sum: 200% = k.) It works!
Just one word of caution for the production use of this function: it can be very hard to spot illegal inputs for the weights. An obvious illegal example is
In: sample_no_replacement_exact([(1,'a'),(2,'b')],2)
ValueError: inclusion probabilities not satisfiable: [(0.6666666666666666, 'a'), (1.3333333333333333, 'b')]
b cannot appear twice as often as a since both always have to be selected. There are more subtle examples. To avoid an exception in production, just use best_effort=True, which rebalances the inclusion probability mass such that we always have a valid distribution. Obviously this might change the inclusion probabilities.
I used an associative map (weight, object). For example:
{
(10,"low"),
(100,"mid"),
(10000,"large")
}
total=10110
Pick a random number between 0 and 'total' and iterate over the keys until this number falls into a given range.
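A minimal Python sketch of that idea (the helper name pick_one is illustrative):

import random

weighted = [(10, "low"), (100, "mid"), (10000, "large")]

def pick_one(weighted):
    # Pick a random number in [0, total) and walk the (weight, value) pairs
    # until the running total passes it.
    total = sum(w for w, _ in weighted)
    r = random.uniform(0, total)
    running = 0
    for w, value in weighted:
        running += w
        if r < running:
            return value
    return weighted[-1][1]   # guard against floating-point edge cases

print(pick_one(weighted))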
