Distributed Memory Top-K Algorithm for Large K

I have a distributed array of rankings, of total size N irregularly distributed amongst NP processors, from which I need to extract the K largest elements. In the limit that K << N, K is smaller than any of the local buffer lengths, and K is relatively small in general (such that it can e.g. fit in reasonable MPI buffers), the following algorithm seems to work well:
Perform a local top-K search to determine the largest K values in each local array segment
Perform a custom Allreduce which performs binary top-K reductions between buffers of size K coming from different processes.
This can be done in a semi-communication-optimal way given the communication patterns underlying MPI_Allreduce.
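For concreteness, here is a rough sketch of this two-step approach using mpi4py (function and variable names are just illustrative, not an established API; it assumes the number of ranks is a power of two):

from heapq import nlargest
from mpi4py import MPI

def distributed_top_k(local_values, k, comm=MPI.COMM_WORLD):
    # Step 1: local top-K on this rank's shard.
    buf = nlargest(k, local_values)
    # Step 2: butterfly (recursive-doubling) exchange of size-K buffers,
    # applying a binary top-K reduction at each stage, mirroring the
    # communication pattern underlying MPI_Allreduce.
    rank, size = comm.Get_rank(), comm.Get_size()
    step = 1
    while step < size:
        partner = rank ^ step
        other = comm.sendrecv(buf, dest=partner, source=partner)
        buf = nlargest(k, buf + other)
        step <<= 1
    return buf  # every rank ends up with the same global top-K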
I'm unclear how this can be done efficiently without the above assumptions about the size of K relative to N and the local buffer sizes. In particular, I'm trying to determine an optimal (or reasonably scaling) algorithm that is compatible with the following:
K can be larger than some or all of the local buffer dimensions
K can be so large as to be impractical to communicate entirely (e.g. trying to determine the top billion elements of a 10 billion element array)
Neither the full array nor the top-K elements need to be sorted on completion.
For arrays which reside on a single processing element, the following questions are related:
Collect top K elements from multiple sorted arrays
Average time complexity of finding top-k elements
Optimal algorithm for returning top k values from an array of length N

The key here is to find the top-K repeatedly, where each time you make K the largest value satisfying the original assumption.
Assuming you'd want to get the top-K', where K' is greater than the local buffer and MPI communication sizes, you could do the following to eventually find the top-K':
1. Find the top-K, where K is the greatest value that fits inside the local buffer and can be communicated with MPI.
2. Append the K elements to the top array A, and remove them from the local arrays.
3. Go back to step 1 until size(A) == K'.
The resulting array A should contain the top-K' elements.
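A rough sketch of this loop in mpi4py (names are mine; an allgather-based merge stands in for the custom top-K allreduce, globally distinct values are assumed so the cutoff removes exactly one batch, and K' <= N):

from heapq import nlargest
from mpi4py import MPI

def batched_top_k(local_values, k_total, k_batch, comm=MPI.COMM_WORLD):
    local, top = list(local_values), []
    while len(top) < k_total:
        k = min(k_batch, k_total - len(top))
        # Step 1: one communicable-sized global top-k batch.
        candidates = nlargest(k, local)
        batch = nlargest(k, [x for part in comm.allgather(candidates) for x in part])
        # Step 2: append the batch to A and drop its elements from the local shard.
        top.extend(batch)
        cutoff = min(batch)
        local = [x for x in local if x < cutoff]
    return top  # the top-K' elements, not globally sorted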


QuickSelect - Print k smallest elements of array A of size n in O(n) time and O(5k) auxiliary space - read only once

I've been trying to solve this problem:
The Select algorithm allows us to find the ith smallest value of a given array A in linear time (O(n)), but requires us to keep A in memory throughout the entire algorithm.
Suggest an algorithm which receives an array of size n, named A, which contains natural numbers, and prints the k smallest elements of A with the following restrictions:
For each i=1...n, you are only allowed to read the value A[i] once. You are not allowed to write into, or exchange between elements of A.
You are allowed to use a second array of size 5k, named B, which can be written and read without restrictions.
Run-time must be linear, by size of A. Assume k<n, and that 5k<n.
I realized that I need to utilize a Median of Medians approach, but I'm having a hard time thinking about how the pivot would be calculated, as I can only store 5k elements.
This would mean that I cannot calculate the Median of Medians which would make the best 70%/30% pivot choice, and so won't reach a linear run-time.
I would appreciate any input in the matter.
Thanks!
Start copying from A into an auxiliary array. Whenever you collect 2k elements, use quickselect to keep the smallest k elements and discard the rest.
Finally call quickselect once more to discard all but the smallest k elements in the remainder.
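A minimal Python sketch of this idea; heapq.nsmallest stands in for a true in-place quickselect, which would keep the pruning at expected linear time and O(k) extra space:

from heapq import nsmallest

def k_smallest_single_pass(A, k):
    buf = []                    # auxiliary array, never longer than 2k <= 5k
    for x in A:                 # each A[i] is read exactly once
        buf.append(x)
        if len(buf) == 2 * k:   # buffer full: prune back down to k elements
            buf = nsmallest(k, buf)
    return nsmallest(k, buf)    # final prune of the leftover tail

Each prune discards k of 2k elements, so there are at most n/k prunes of O(k) work each, giving O(n) overall.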

Fewest subsets with sum less than N

I have a specific sub-problem for which I am having trouble coming up with an optimal solution. This problem is similar to the subset sum group of problems as well as space filling problems, but I have not seen this specific problem posed anywhere. I don't necessarily need the optimal solution (as I am relatively certain it is NP-hard), but an effective and fast approximation would certainly suffice.
Problem: Given a list of positive valued integers find the fewest number of disjoint subsets containing the entire list of integers where each subset sums to less than N. Obviously no integer in the original list can be greater than N.
In my application I have many lists and I can concatenate them into columns of a matrix as long as they fit in the matrix together. For downstream purposes I would like to have as little "wasted" space in the resulting ragged matrix, hence the space filling similarity.
Thus far I am employing a greedy-like approach, processing from the largest integers down and finding the largest integer that fits into the current subset under the limit N. Once the smallest integer no longer fits into the current subset I proceed to the next subset similarly until all numbers are exhausted. This almost certainly does not find the optimal solution, but was the best I could come up with quickly.
BONUS: My application actually requires batches, where there is a limit on the number of subsets in each batch (M). Thus the larger problem is to find the fewest batches where each batch contains M subsets and each subset sums to less than N.
Straight from Wikipedia (with some bold amendments):
In the bin packing problem, objects [Integers] of different volumes [values] must be packed into a finite number of bins [sets] or containers, each of volume V [summation of the subset < V], in a way that minimizes the number of bins [sets] used. In computational complexity theory, it is a combinatorial NP-hard problem.
https://en.wikipedia.org/wiki/Bin_packing_problem
As far as I can tell, this is exactly what you are looking for.
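If an exact answer isn't required, first-fit decreasing is a standard bin-packing heuristic and is close to the greedy described in the question; a minimal Python sketch (for the strict "sums to less than N" requirement, change <= to <):

def pack(values, limit):
    bins, sums = [], []
    for v in sorted(values, reverse=True):   # largest integers first
        for i in range(len(bins)):
            if sums[i] + v <= limit:         # first open subset with room
                bins[i].append(v)
                sums[i] += v
                break
        else:                                # nothing fits: open a new subset
            bins.append([v])
            sums.append(v)
    return bins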

data structure similar to array but supporting deletion

I am thinking of the following data structure question:
given integers between 1 and n in sorted order, every operation queries and then removes (in a single call) the kth smallest number. How can the query and the removal both be made constant-time operations?
It is similar to an array structure but requires constant-time removal. An order-statistic balanced binary tree can do this, but with O(lg n) complexity per operation.
Can one take advantage of the range property (numbers only between 1 and n) to make it work?
LinkedHashSet is what you are looking for. If you want an index as in arrays, then use LinkedHashMap. But you need to insert them in order from 1 to n.
What is the maximal value of N? You mentioned that you are going to work with positive numbers, so a Van Emde Boas tree is probably the best choice for you.
Short description:
- allows storing only positive numbers from [0, 2^k), where k is the number of bits required to store the maximal number N.
- all operations (insert, delete, lookup, find_next, find_prev) work in O(log k), not O(log N). So, for 32-bit integers the complexity is log(32) = 5.
- the disadvantage is memory consumption: it requires 2^k ~ O(N) memory, so for storing integers you need ~1 GB of RAM. Remember that usually O(N) memory means O(number of elements), but here it means O(maximal stored value).
Note: I'm not sure about support for a k-th element query, but the description looks nice:
FindNext: find the key/value pair with the smallest key at least a given k
FindPrevious: find the key/value pair with the largest key at most a given k
UPDATE
As Dukeling mentioned below, a k-th element query is not supported. The only way I see to implement it is:
int x = getMin();
for(int i=0;i<k-1;i++) x = getNext(x);
After this loop, x will store the k-th element. But the complexity is O(k * log(bits)), which is too bad for large values of k.

Searching for a tuple with all elements greater than a given tuple efficiently

Consider the following list of tuples:
[(5,4,5), (6,9,6), (3,8,3), (7,9,8)]
I am trying to devise an algorithm to check whether there exists at least one tuple in the list where all elements of that tuple are greater than or equal to a given tuple (the needle).
For example, for a given tuple (6,5,7), the algorithm should return True, as every element of the last tuple in the list, (7,9,8), is greater than or equal to the corresponding element of the given tuple. However, for a given tuple (9,1,9), the algorithm should return False, as there is no tuple in the list where each element is greater than or equal to the corresponding element of the given tuple. In particular, this is because the first element 9 of the given tuple is greater than the first element of every tuple in the list.
A naive algorithm would loop through the tuples in the list one by one, and loop through the elements of each tuple in an inner loop. Assuming there are n tuples, where each tuple has m elements, this gives a complexity of O(nm).
I am wondering whether it would be possible to have an algorithm that performs the task with a lower complexity. Pre-processing or any fancy data structure to store the data is allowed!
My original thought was to make use of some variant of binary search, but I can't seem to find a data structure that allows us to avoid falling back to the naive solution once we have eliminated some tuples based on the first element, which implies that this algorithm could potentially be O(nm) at the end as well.
Thanks!
Consider the 2-tuple version of this problem. Each tuple (x,y) corresponds to an axis-aligned rectangle on the plane with upper right corner at (x,y), extending to (-∞,-∞) at the lower left. The collection corresponds to the union of these rectangles. Given a query point (needle), we need only determine if it's in the union. Knowing the boundary is sufficient for this. It's an axis-aligned polyline that's monotonically non-increasing in y with respect to x: a "downward staircase" in the x direction. With any reasonable data structure (e.g. an x-sorted list of points on the polyline), it's simple to make the decision in O(log n) time for n rectangles. It's not hard to see how to construct the polyline in O(n log n) time by inserting rectangles one at a time, each with O(log n) work.
Here's a visualization. The four dots are input tuples. The area left and below the blue line corresponds to "True" return values:
Tuples A, B, C affect the boundary. Tuple D doesn't.
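A small Python sketch of the 2-tuple case (names are mine): the staircase is built by sweeping x from high to low and keeping only the Pareto-maximal points, and a query is a single bisection on x.

from bisect import bisect_left

def build_staircase(points):
    stair, best_y = [], float("-inf")
    for x, y in sorted(points, reverse=True):   # sweep x from high to low
        if y > best_y:                          # keep only Pareto-maximal points
            stair.append((x, y))
            best_y = y
    stair.reverse()                             # x ascending, y non-increasing
    return stair, [x for x, _ in stair]         # also return xs for bisection

def dominated(stair, xs, qx, qy):
    i = bisect_left(xs, qx)                     # first staircase point with x >= qx
    return i < len(stair) and stair[i][1] >= qy # it has the largest y among those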
So the question is whether this 2-tuple version generalizes nicely to 3. The union of semi-infinite axis-aligned rectangles becomes a union of rectangular prisms instead. The boundary polyline becomes a 3d surface.
There exist a few common ways to represent problems like this. One is as an octree. Computing the union of octrees is a well-known standard algorithm and fairly efficient. Querying one for membership requires O(log k) time where k is the biggest integer coordinate range contained in it. This is likely to be the simplest option. But octrees can be relatively slow and take a lot of space if the integer domain is big.
Another candidate without these weaknesses is a Binary Space Partition, which can handle arbitrary dimensions. BSPs use (hyper)planes of dimension n-1 to recursively split n-d space. A tree describes the logical relationship of the planes. In this application, you'll need 3 planes per tuple. The intersection of the "True" half-spaces induced by the planes will be the True semi-infinite prism corresponding to the tuple. Querying a needle is traversing the tree to determine if you're inside any of the prisms. Average case behavior of BSPs is very good, but worst case size of the tree is terrible: O(n) search time over a tree of size O(2^n). In real applications, tricks are used to find BSPs of modest size at creation time, starting with randomizing insertion order.
K-d trees are another tree-based space partitioning scheme that could be adapted to this problem. This will take some work, though, because most presentations of k-d trees are concerned with searching for points, not representing regions. They'd have the same worst case behavior as BSPs.
The other bad news is that these algorithms aren't well-suited to tuples much bigger than 3. Trees quickly become too big. Searching high dimensional spaces is hard and a topic of active research. However, since you didn't say anything about tuple length, I'll stop here.
This kind of problem is addressed by spatial indexing systems. There are many data structures that allow your query to be executed efficiently.
Let S be a topologically sorted copy of the original set of n m-tuples. Then we can use binary search for any test tuple in S, at a cost of O(m ln n) per search (due to at most lg n search plies with at most m comparisons per ply).
Note, suppose there exist tuples P, Q in S such that P ≤ Q (that is, no element of Q is smaller than the corresponding element of P). Then tuple P can be removed from S. In practice this often might cut the size of S to a small multiple of m, which would give O(m ln m) performance; but in the worst case, will provide no reduction at all.
Trying to answer:
all corresponding elements greater than or equal to a given tuple (needle)
(using y and z for members of the set/haystack, x for the query tuple/needle, and x ≪ y when xᵢ ≤ yᵢ for all i (x dominated by y))
compute telling summary information like min, sum and max of all tuple elements
order criteria by selectivity
weed out dominated tuples
build a k-d-tree
top off with lower and upper bounding boxes:
one tuple lower consisting of the minimum values for each element (if lower dominates x return True)
and upper consisting of the maximum values for each element (if upper does not dominate x, return False)
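A tiny Python sketch of the bounding-box shortcuts in front of the naive scan (names are mine):

def exists_dominator(tuples, needle):
    dominates = lambda y, x: all(a >= b for a, b in zip(y, x))
    lower = tuple(map(min, zip(*tuples)))   # elementwise minima
    upper = tuple(map(max, zip(*tuples)))   # elementwise maxima
    if dominates(lower, needle):            # then every tuple dominates the needle
        return True
    if not dominates(upper, needle):        # then no tuple can possibly dominate it
        return False
    return any(dominates(t, needle) for t in tuples)   # fall back to the O(nm) scan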

Range query for a semigroup operator (union)

I'm looking to implement an algorithm, which is given an array of integers and a list of ranges (intervals) in that array, returns the number of distinct elements in each interval. That is, given the array A and a range [i,j] returns the size of the set {A[i],A[i+1],...,A[j]}.
Obviously, the naive approach (iterate from i to j and count, ignoring duplicates) is too slow. Range-sum techniques seem inapplicable, since (A ∪ B) − B isn't always equal to A.
I've looked up Range Queries in Wikipedia, and it hints that Yao (in '82) showed an algorithm that does this for semigroup operators (which union seems to be) with linear preprocessing time and space and almost constant query time. The article, unfortunately, is not available freely.
Edit: it appears this exact problem is available at http://www.spoj.com/problems/DQUERY/
There's a rather simple algorithm which uses O(N log N) time and space for preprocessing and O(log N) time per query. At first, create a persistent segment tree for answering range-sum queries (initially, it should contain zeroes at all positions). Then iterate through all the elements of the given array and store the latest position of each number. At each iteration create a new version of the persistent segment tree, putting 1 at the latest position of each element (at each iteration the position of only one element can be updated, so only one position's value in the segment tree changes and the update can be done in O(log N)). To answer a query (l, r), you just need to find the sum on the (l, r) segment for the version of the tree which was created when iterating through the r-th element of the initial array.
Hope this algorithm is fast enough.
Upd. There's a little mistake in my explanation: at each step, at most two positions' values in the segment tree might change (because it's necessary to put 0 at the previous latest position of a number when it's updated). However, it doesn't change the complexity.
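A compact Python sketch of this construction (class and function names are mine; positions are 0-based and queries are inclusive):

class Node:
    __slots__ = ("left", "right", "val")
    def __init__(self, left=None, right=None, val=0):
        self.left, self.right, self.val = left, right, val

def build(lo, hi):
    if lo == hi:
        return Node()
    mid = (lo + hi) // 2
    return Node(build(lo, mid), build(mid + 1, hi))

def update(node, lo, hi, pos, delta):
    # Returns a new root; only O(log N) nodes are copied per update.
    if lo == hi:
        return Node(val=node.val + delta)
    mid = (lo + hi) // 2
    if pos <= mid:
        return Node(update(node.left, lo, mid, pos, delta), node.right, node.val + delta)
    return Node(node.left, update(node.right, mid + 1, hi, pos, delta), node.val + delta)

def range_sum(node, lo, hi, l, r):
    if r < lo or hi < l or node.val == 0:
        return 0
    if l <= lo and hi <= r:
        return node.val
    mid = (lo + hi) // 2
    return range_sum(node.left, lo, mid, l, r) + range_sum(node.right, mid + 1, hi, l, r)

def preprocess(A):
    n = len(A)
    roots = [build(0, n - 1)]        # version 0: all zeroes
    last = {}                        # latest position seen for each value
    for r, x in enumerate(A):
        root = roots[-1]
        if x in last:                # clear the previous latest position
            root = update(root, 0, n - 1, last[x], -1)
        root = update(root, 0, n - 1, r, +1)
        last[x] = r
        roots.append(root)           # roots[r + 1] answers queries ending at r
    return roots

def distinct(A, roots, l, r):        # number of distinct values in A[l..r]
    return range_sum(roots[r + 1], 0, len(A) - 1, l, r)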
You can answer any of your queries in constant time by performing a quadratic-time precomputation:
For every i from 0 to n-1:
    S <- new empty set backed by a hashtable
    C <- 0
    For every j from i to n-1:
        If A[j] does not belong to S, increment C and add A[j] to S.
        Store C as the answer for the query associated with interval i..j.
This algorithm takes quadratic time since for each interval we perform a bounded number of operations, each one taking constant time (note that the set S is backed by a hashtable), and there's a quadratic number of intervals.
If you don't have additional information about the queries (total number of queries, distribution of intervals), you cannot do essentially better, since the total number of intervals is already quadratic.
You can trade the quadratic precomputation for n linear on-the-fly computations: after receiving a query of the form A[i..j], precompute (in O(n) time) the answer for all intervals A[i..k], k>=i. This guarantees that the amortized complexity remains quadratic, and you are not forced to perform the complete quadratic precomputation at the beginning.
Note that the obvious algorithm (the one you call obvious in the statement) is cubic, since you scan every interval completely.
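A short Python sketch of that lazy per-left-endpoint variant (names are mine):

class DistinctRangeCounter:
    def __init__(self, A):
        self.A = A
        self.rows = {}                # left endpoint i -> answers for A[i..k], k >= i

    def query(self, i, j):            # distinct values in A[i..j], inclusive
        if i not in self.rows:        # first query with this i: one O(n) sweep
            seen, count, row = set(), 0, []
            for x in self.A[i:]:
                if x not in seen:
                    seen.add(x)
                    count += 1
                row.append(count)
            self.rows[i] = row
        return self.rows[i][j - i]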
Here is another approach which might be quite closely related to the segment tree. Think of the elements of the array as leaves of a full binary tree. If there are 2^n elements in the array there are n levels of that full tree. At each internal node of the tree store the union of the points that lie in the leaves beneath it. Each number in the array needs to appear once in each level (less if there are duplicates). So the cost in space is a factor of log n.
Consider a range A..B of length K. You can work out the union of points in this range by forming the union of sets associated with leaves and nodes, picking nodes as high up the tree as possible, as long as the subtree beneath those nodes is entirely contained in the range. If you step along the range picking subtrees that are as big as possible you will find that the size of the subtrees first increases and then decreases, and the number of subtrees required grows only with the logarithm of the size of the range - at the beginning if you could only take a subtree of size 2^k it will end on a boundary divisible by 2^(k+1) and you will have the chance of a subtree of size at least 2^(k+1) as the next step if your range is big enough.
So the number of semigroup operations required to answer a query is O(log n) - but note that the semigroup operations may be expensive as you may be forming the union of two large sets.
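A sketch of that union tree in Python (the array is padded to a power of two so the tree is full; a query unions the sets of O(log n) nodes):

class UnionTree:
    def __init__(self, A):
        n = 1
        while n < len(A):
            n *= 2
        self.n = n
        self.sets = [set() for _ in range(2 * n)]
        for i, x in enumerate(A):                        # leaves
            self.sets[n + i] = {x}
        for v in range(n - 1, 0, -1):                    # internal nodes: union of children
            self.sets[v] = self.sets[2 * v] | self.sets[2 * v + 1]

    def distinct(self, l, r):          # count distinct values in A[l..r], inclusive
        res, l, r = set(), l + self.n, r + self.n + 1
        while l < r:                   # pick maximal subtrees covering the range
            if l & 1:
                res |= self.sets[l]
                l += 1
            if r & 1:
                r -= 1
                res |= self.sets[r]
            l //= 2
            r //= 2
        return len(res)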
