k-size possible number combinations ordered by each sum - performance

Given a set of n numbers, what is the code that generates all possible k-size subsets in descending order of their sums?
Example:
Set={9,8,6,2,1} => n=5 and k=3. So the output is:
[9,8,6]
[9,8,2]
[9,8,1]
[9,6,2]
[9,6,1]
[8,6,2]
[8,6,1]
[9,2,1]
[8,2,1]
[6,2,1]
The most efficient algorithm is preferred, but even an algorithm that enumerates all n-choose-k combinations is still an acceptable answer.
One-by-one generation in MATLAB code is preferred for the implementation, or a solution in which the maximum size of the ordered list can be specified (so that, for larger n and k, one may use an approximation and return only a given number of entries of the list without computing all possibilities).
Note: 1) Pay attention to the position of [9,2,1] in this ordered list; plain index ordering is not the correct answer.
2) This may be a type of lexicographical order.

Thanks to Divakar, Yvon, and Luis, here is one possible answer to this question.
The sorted set of combinations ends up in SSC:
combs = nchoosek(Set,k);
[~,ind] = sort(sum(combs,2),'descend');
SSC = combs(ind,:);
If you want the index within Set (which has unique numbers) of each entry of a given row num_arr of SSC, use this code:
for i = 1:k
    Index(i) = find(SSC(num_arr,i) == Set(1,:));
end
This code returns Index = [1,3,5] for the row [9,6,1].
For greater n
In this case the computation is very time-consuming or even impractical. An approximation may solve this issue; for such situations, you can generate only the first answers by modifying nchoosek.m in MATLAB, or use a lazy scheme such as the one sketched below.
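As an alternative, here is a minimal Python sketch of lazy, one-by-one generation (the function name subsets_desc_by_sum and its limit parameter are my own; the same heap-based scheme can be ported to MATLAB). It sorts the set in descending order and keeps a max-heap of candidate index tuples; each popped combination only spawns successors with equal or smaller sums, so the subsets come out in non-increasing sum order and you can stop after any number of them:

import heapq

def subsets_desc_by_sum(values, k, limit=None):
    # Lazily yield k-element subsets of `values` in non-increasing order of
    # their sums, without materialising all nchoosek(n,k) rows first.
    vals = sorted(values, reverse=True)             # biggest values first
    n = len(vals)
    first = tuple(range(k))                         # indices of the k largest values
    heap = [(-sum(vals[i] for i in first), first)]  # max-heap via negated sums
    seen = {first}
    produced = 0
    while heap and (limit is None or produced < limit):
        neg_s, idx = heapq.heappop(heap)
        yield [vals[i] for i in idx]                # best remaining subset
        produced += 1
        # Successors: move one index forward; since vals is sorted descending,
        # the sum can only stay the same or decrease.
        for p in range(k):
            nxt = idx[p] + 1
            upper = idx[p + 1] if p + 1 < k else n
            if nxt < upper:
                cand = idx[:p] + (nxt,) + idx[p + 1:]
                if cand not in seen:
                    seen.add(cand)
                    heapq.heappush(heap, (neg_s + vals[idx[p]] - vals[nxt], cand))

# Example: list(subsets_desc_by_sum([9, 8, 6, 2, 1], 3)) reproduces the list above
# (subsets with equal sums, such as [9,6,1] and [8,6,2], may appear in either order).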

Related

Given a set of tuples (value, cost), is there an algorithm to find the combination of tuples that has the least cost for storing a given number?

I have a set of (value, cost) tuples: (2000000,200), (500000,75), (100000,20).
Suppose X is any positive number.
Is there an algorithm to find the combination of tuples with the least total cost whose values sum to enough to store X?
The sum of the tuple values can be equal to or greater than the given X.
ex.
giving x = 800000 the answer should be (500000,75) , (100000,20) , (100000,20) , (100000,20)
giving x = 900000 the answer should be (500000,75) , (500000,75)
giving x = 1500000 the answer should be (2000000,200)
I could hardcode this, but the set and the tuples are subject to change, so it would be great if this could be handled by a well-known algorithm.
This can be solved with dynamic programming, since you have no limit on the number of tuples and can afford sums higher than the provided number.
First, you can optimize the tuples: if one big tuple can be replaced by a number of smaller ones with equal or lower cost and equal or higher value, you can remove the bigger tuple entirely.
Also, it is useful for later steps to order the tuples of the optimized set by value/cost in descending order; a tuple is better if its value/cost ratio is bigger.
Time complexity is O(N*T), where N is the target number divided by the common factor F of the optimized tuple values, and T is the number of tuples in the optimized set.
Memory complexity is O(N).
Set up an array a of size N that will contain:
in a[i].cost, the best cost of a solution for i*F (0 for the special case "no solution yet");
in a[i].tuple, the tuple that led to that best solution.
Recursion scheme:
the function gets n as a single parameter: the provided number divided by F on the initial call, and the leftover needed value (in units of F) on recursive calls
if array a for n is filled, return a[n].cost
otherwise set current_cost to MAXINT
for each tuple from best to worst try to add it to solution:
if value/F >= n, we've got some solution, compare tuple cost to current_cost and if it's better, update a[n].cost and a[n].tuple
if value/F < n, call recursively for n-value/F and compare cost with current solution, update current solution and a[n].cost, a[n].tuple if needed
after all, return a[n].cost, or throw an exception if no solution exists
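A minimal memoized Python sketch of this scheme (the name min_cost_cover and the details of the memo are my own; the recursion can be turned into the bottom-up loop mentioned below):

from functools import reduce
from math import gcd

def min_cost_cover(x, tuples):
    # tuples is a list of (value, cost); returns (best_cost, chosen_tuples)
    # whose values sum to at least x.
    f = reduce(gcd, [v for v, _ in tuples] + [x])                     # common factor F
    tuples = sorted(tuples, key=lambda t: t[0] / t[1], reverse=True)  # best value/cost first
    n = x // f
    memo = {}                                   # memo[m] = (best cost, tuple used) for m units of F

    def solve(m):                               # m = leftover needed value, in units of F
        if m <= 0:
            return 0, None
        if m in memo:
            return memo[m]
        best = (float('inf'), None)
        for v, c in tuples:
            units = v // f
            sub_cost = 0 if units >= m else solve(m - units)[0]
            if c + sub_cost < best[0]:
                best = (c + sub_cost, (v, c))
        memo[m] = best
        return best

    cost, _ = solve(n)
    chosen, m = [], n
    while m > 0:                                # walk the memo to recover the chosen tuples
        t = memo[m][1]
        chosen.append(t)
        m -= t[0] // f
    return cost, chosen

# min_cost_cover(800000, [(2000000, 200), (500000, 75), (100000, 20)])
# -> (135, [(500000, 75), (100000, 20), (100000, 20), (100000, 20)])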
The tuple list can be retrieved from a by traversing the .tuple entries step by step.
It's possible to reduce the overall array size down to max(tuple.value/F), but you'll have to save a more or less complete solution instead of a single best .tuple for each element, and you'll have to handle the "sliding window" carefully.
It's possible to turn the recursion into a loop from 0 to n, as with many other dynamic programming algorithms.

Algorithm to generate k element subsets in order of their sum

If I have an unsorted large set of n integers (say 2^20 of them) and would like to generate subsets with k elements each (where k is small, say 5) in increasing order of their sums, what is the most efficient way to do so?
Why I need to generate these subsets in this fashion is that I would like to find the k-element subset with the smallest sum satisfying a certain condition, and I thus would apply the condition on each of the k-element subsets generated.
Also, what would be the complexity of the algorithm?
There is a similar question here: Algorithm to get every possible subset of a list, in order of their product, without building and sorting the entire list (i.e Generators) about generating subsets in order of their product, but it wouldn't fit my needs due to the extremely large size of the set n
I intend to implement the algorithm in Mathematica, but could do it in C++ or Python too.
If your desired property of the small subsets (call it P) is fairly common, a probabilistic approach may work well:
Sort the n integers (for millions of integers i.e. 10s to 100s of MB of ram, this should not be a problem), and sum the k-1 smallest. Call this total offset.
Generate a random k-subset (say, by sampling k random numbers, mod n) and check it for P-ness.
On a match, note the sum-total of the subset. Subtract offset from this to find an upper bound on the largest element of any k-subset of equivalent sum-total.
Restrict your set of n integers to those less than or equal to this bound.
Repeat (goto 2) until no matches are found within some fixed number of iterations.
Note the initial sort is O(n log n). The binary search implicit in step 4 is O(log n).
Obviously, if P is so rare that random pot-shots are unlikely to get a match, this does you no good.
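A rough Python sketch of this loop (the has_property predicate, the max_misses cutoff, and the function name are assumptions of mine, not part of the original answer):

import bisect
import random

def smallest_p_subset(nums, k, has_property, max_misses=10000):
    arr = sorted(nums)
    offset = sum(arr[:k - 1])                # sum of the k-1 smallest values
    hi = len(arr)                            # only arr[:hi] is still eligible
    best, misses = None, 0
    while misses < max_misses and hi >= k:
        picks = random.sample(range(hi), k)  # step 2: random k-subset of the eligible prefix
        subset = [arr[i] for i in picks]
        if has_property(subset):
            if best is None or sum(subset) < sum(best):
                best = subset
            # steps 3-4: any k-subset with an equal-or-smaller sum has its
            # largest element bounded by sum(best) - offset
            bound = sum(best) - offset
            hi = bisect.bisect_right(arr, bound, 0, hi)
            misses = 0
        else:
            misses += 1
    return best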
Even if only 1 in 1000 of the k-sized sets meets your condition, that's still far too many combinations to test. I believe the runtime scales with nCk (n choose k), where n is the size of your unsorted list. The answer by Andrew Mao has a link to this value. 10^28/1000 is still 10^25. Even at 1000 tests per second, that's still 10^22 seconds, or roughly 3*10^14 years.
If you are allowed to, I think you need to eliminate duplicate numbers from your large set. Each duplicate you remove will drastically reduce the number of evaluations you need to perform. Sort the list, then kill the dupes.
Also, are you looking for the single best answer here? Who will verify the answer, and how long would that take? I suggest implementing a Genetic Algorithm and running a bunch of instances overnight (for as long as you have the time). This will yield a very good answer, in much less time than the duration of the universe.
Do you mean 20 integers, or 2^20? If it's really 2^20, then you may need to go through a significant fraction of the (2^20 choose 5) subsets before you find one that satisfies your condition. On a modern 100k MIPS CPU, assuming just 1 instruction can compute a set and evaluate that condition, going through that entire set would still take on the order of 3 billion years. So if you even need to go through a fraction of that, it's not going to finish in your lifetime.
Even if the number of integers is smaller, this seems to be a rather brute force way to solve this problem. I conjecture that you may be able to express your condition as a constraint in a mixed integer program, in which case solving the following could be a much faster way to obtain the solution than brute force enumeration. Assuming your integers are w_i, i from 1 to N:
min sum(i) w_i*x_i
subject to:
    sum(i) x_i = k
    (some constraints on w_i*x_i)
    x_i binary
If it turns out that the linear programming relaxation of your MIP is tight, then you would be in luck and have a very efficient way to solve the problem, even for 2^20 integers (Example: max-flow/min-cut problem.) Also, you can use the approach of column generation to find a solution since you may have a very large number of values that cannot be solved for at the same time.
If you post a bit more about the constraint you are interested in, I or someone else may be able to propose a more concrete solution for you that doesn't involve brute force enumeration.
Here's an approximate way to do what you're saying.
First, sort the list. Then, consider some length-5 index vector v, corresponding to positions in the sorted list, where the maximum index is some number m, and some other index vector v', with max index m' > m. The smallest sum over all such vectors v' is always at least as large as the smallest sum over all vectors v.
So, here's how you can loop through the elements with approximately increasing sum:
sort arr
for i = 1 to N
    for v = 5-element subsets of (1, ..., i)
        set = arr{v}
        if condition(set) is satisfied
            break_loop = true
            compute sum(set), keep set if it is the best so far
    break if break_loop
Basically, this means that you no longer need to check for 5-element combinations of (1, ..., n+1) if you find a satisfying assignment in (1, ..., n), since any satisfying assignment with max index n+1 will have a greater sum, and you can stop after that set. However, there is no easy way to loop through the 5-combinations of (1, ..., n) while guaranteeing that the sum is always increasing, but at least you can stop checking after you find a satisfying set at some n.
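For concreteness, a small Python sketch of that loop (the function name is mine; it only enumerates subsets that use the newly admitted element, since the rest were already checked on previous iterations, and it stops at the first prefix that yields a match, which is approximate in the sense explained above):

from itertools import combinations

def first_satisfying_small_sum(arr, k, condition):
    arr = sorted(arr)
    best = None
    for i in range(k, len(arr) + 1):
        # new subsets at this step are exactly those containing arr[i-1]
        for rest in combinations(range(i - 1), k - 1):
            subset = [arr[j] for j in rest] + [arr[i - 1]]
            if condition(subset):
                s = sum(subset)
                if best is None or s < best[0]:
                    best = (s, subset)
        if best is not None:     # stop growing the prefix once a match exists
            return best[1]
    return None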
This looks to be a perfect candidate for map-reduce (http://en.wikipedia.org/wiki/MapReduce). If you know of any way of partitioning them smartly so that passing candidates are equally present in each node then you can probably get a great throughput.
Complete sort may not really be needed as the map stage can take care of it. Each node can then verify the condition against the k-tuples and output results into a file that can be aggregated / reduced later.
If you know of the probability of occurrence and don't need all of the results try looking at probabilistic algorithms to converge to an answer.

finding max value on each subset

I'm banging my head here. Let X={x1,x2,...,xn} be an integer set. Let A1,A2,...,Am be m subsets of X. For any i and j, Ai and Aj are not necessarily disjoint. Now the goal is to find the maximal value of each Ai (i=1,...,m) efficiently, using as few operations as possible.
For example, given X={2,4,6,3,1}, and its subsets A1={2,3,1}, A2={2,6,3,1}, A3={4,2,3,1}. We need to find Max{A1}, Max{A2}, Max{A3}, respectively.
The brute-force way for finding Max{A1}, Max{A2}, Max{A3} is to scan all the elements in each Ai, and (m*d) operations are required, with m the number of subsets of X, and d the average length of the subsets {Ai} of X.
Now, I have some observations:
(1) For any set Y⊆X, max{Y}≤max{X}.
For instance, since Max{X}=6 and 6 is in A2, then Max{A2}=6 can be found directly.
(2) For any two sets A and B, if A∩B is non-empty, Max{A} and Max{B} can be identified as follows:
First, we find the common part between A and B, denoted as c=max{A∩B}.
Then, we find Max{A}=Max{Max{A-(A∩B)}, c} and Max{B}=Max{Max{B-(A∩B)}, c}.
I am not sure whether there are other interesting observations for finding these max values.
Any ideas are warmly welcome!
My question is: for the general case where X={x1,x2,...,xn} and there are m subsets of X, denoted A1,A2,...,Am, are there more efficient techniques to find the max values Max{Ai} (i=1,...,m)?
Your help will be highly appreciated!
There is no method asymptotically better than brute force, assuming a typical representation of the given sets. Simply scanning through the sets to find the largest member of each requires linear time and linear time is optimal since every member of the set must be read in order to determine the maximum value.
Now if the input representation is not simply a listing of the elements in each set, then other bounds and algorithms may apply. For example, if we know the input sets are sorted and the length of each set is given as part of the input, we can obviously find the maximum elements in time linear in the number of subsets only, not in their lengths.
If your sets are implemented in a hash (or, more generally, if you can otherwise check for the presence of a value in the set in O(1) time) you can improve on a brute-force approach.
Instead of iterating through the elements of the subset and maintaining the maximum, iterate over the elements of the parent set in descending order, checking for the presence of those elements in the subset. The first element found is necessarily the subset's maximum. Technically, this still takes O(n) time (n = subset cardinality) in the general case, but will generally carry a great performance benefit in practice. (If you have any data regarding the number and size of the subsets, and they favor this approach, you can improve on O(n) in the average case.)
This approach requires sorting the parent set's elements (n log n), however, so it may only be worthwhile if the number of subsets is much greater than the cardinality of the parent set.
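A small Python sketch of that descending scan (the function name is mine; the subsets are assumed to support O(1) membership tests, e.g. Python sets):

def subset_maxima(parent, subsets):
    desc = sorted(parent, reverse=True)   # one shared n log n sort of the parent set
    maxima = []
    for sub in subsets:
        for x in desc:
            if x in sub:                  # first hit is this subset's maximum
                maxima.append(x)
                break
    return maxima

# Example from the question:
# subset_maxima({2, 4, 6, 3, 1}, [{2, 3, 1}, {2, 6, 3, 1}, {4, 2, 3, 1}])  ->  [3, 6, 4]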

Efficient algorithm for finding a set of non adjacent subarrays maximizing their total sum

I've come across this problem in a programming contest site and been trying different things for a few days but none of them seem to be efficient enough.
Here is the question: You are given a large array of integers and a number k. The goal is to divide the array into subarrays each containing no more than k elements, such that the sum of all the elements in all the sub arrays is maximal. Another condition is that none of these sub arrays can be adjacent to each other. In other words, we have to drop a few terms from the original array.
It's been bugging me for a while and I would like to hear your perspective on approaching this problem.
Dynamic programming should do the trick. Short explanation why:
The key property of a problem susceptible to dynamic programming is that the optimal solution to the problem (here: the whole array) can always be expressed as composition of two optimal solutions to subproblems (here: two subarrays.) Not every split needs to have this property - it is sufficient for one such split to exist for any optimal solution.
Clearly if you split the optimal solution between arrays (on an element that has been dropped), then the subsolutions are optimal within both subarrays.
The algorithm:
Try every element of the array in turn as the splitting element, looking for the one that yields the best result. Solve the problem recursively for both parts of the array (the recursion stops when the subarray is no longer than k). Memoize solutions to avoid exponential time (the recursion will obviously try the same subarray many times.)
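A memoized Python sketch of this recursion (the function name and the assumption of non-negative values are mine; under that assumption a piece of length at most k is simply kept whole). It is slower than the O(len(a)*k) answer further below, but follows the description directly:

from functools import lru_cache

def max_sum_nonadjacent_blocks(a, k):
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)

    @lru_cache(maxsize=None)
    def solve(lo, hi):                       # best total obtainable from a[lo:hi]
        if hi - lo <= k:
            return prefix[hi] - prefix[lo]   # short enough: keep the whole block
        best = 0
        for p in range(lo, hi):              # try dropping a[p] as the separator
            best = max(best, solve(lo, p) + solve(p + 1, hi))
        return best

    return solve(0, len(a))

# max_sum_nonadjacent_blocks([5, 5, 5, 5], 2) -> 15 (keep 5,5, drop 5, keep 5)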
This is not a solution, but a clue.
Consider solving the following problem:
From an array X, choose a subset of elements such that none of them are adjacent to each other and their sum is maximum.
Now, the above problem is a special case of your problem where K=1. Think how you can expand the solution to a general case. Let me know if you don't know how to solve the simpler case.
I don't have time to explain why this works and should be the accepted answer:
def maxK(a, k):
    states = k + 1
    myList = [0 for i in range(states)]           # one slot per "phase" of where a kept run may start
    for i in range(0, len(a)):
        maxV = max(myList)                        # best total so far if a[i] is dropped
        myList = [a[i] + j for j in myList]       # otherwise extend every candidate run with a[i]
        myList[(states - i) % states] = maxV      # reset one slot per step, so no kept run exceeds k elements
    return max(myList)
This works with negative numbers too. This is linear in size(a) times k. The language I used is Python because at this level it can be read as if it were pseudo code.

Efficiently selecting a set of random elements from a linked list

Say I have a linked list of numbers of length N. N is very large and I don’t know in advance the exact value of N.
How can I most efficiently write a function that will return k completely random numbers from the list?
There's a very nice and efficient algorithm for this using a method called reservoir sampling.
Let me start by giving you its history:
Knuth calls this Algorithm R on p. 144 of his 1997 edition of Seminumerical Algorithms (volume 2 of The Art of Computer Programming), and provides some code for it there. Knuth attributes the algorithm to Alan G. Waterman. Despite a lengthy search, I haven't been able to find Waterman's original document, if it exists, which may be why you'll most often see Knuth quoted as the source of this algorithm.
McLeod and Bellhouse, 1983 (1) provide a more thorough discussion than Knuth as well as the first published proof (that I'm aware of) that the algorithm works.
Vitter 1985 (2) reviews Algorithm R and then presents an additional three algorithms which provide the same output, but with a twist. Rather than making a choice to include or skip each incoming element, his algorithm predetermines the number of incoming elements to be skipped. In his tests (which, admittedly, are out of date now) this decreased execution time dramatically by avoiding random number generation and comparisons on each in-coming number.
In pseudocode the algorithm is:
Let R be the result array of size s
Let I be an input queue

> Fill the reservoir array
for j in the range [1,s]:
    R[j]=I.pop()

elements_seen=s
while I is not empty:
    elements_seen+=1
    j=random(1,elements_seen)   > This is inclusive
    if j<=s:
        R[j]=I.pop()
    else:
        I.pop()
Note that I've specifically written the code to avoid specifying the size of the input. That's one of the cool properties of this algorithm: you can run it without needing to know the size of the input beforehand and it still assures you that each element you encounter has an equal probability of ending up in R (that is, there is no bias). Furthermore, R contains a fair and representative sample of the elements the algorithm has considered at all times. This means you can use this as an online algorithm.
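For reference, a minimal runnable Python version of the same algorithm (the function name is mine; the linked list is assumed to be exposed as an iterator or generator over its nodes):

import random

def reservoir_sample(iterable, s):
    it = iter(iterable)
    reservoir = [next(it) for _ in range(s)]     # fill the reservoir with the first s items
    elements_seen = s
    for item in it:
        elements_seen += 1
        j = random.randint(1, elements_seen)     # inclusive on both ends, as in the pseudocode
        if j <= s:
            reservoir[j - 1] = item              # this item ends up in R with probability s/elements_seen
    return reservoir

# Example: reservoir_sample(range(1_000_000), 5) returns 5 uniformly sampled numbers.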
Why does this work?
McLeod and Bellhouse (1983) provide a proof using the mathematics of combinations. It's pretty, but it would be a bit difficult to reconstruct it here. Therefore, I've generated an alternative proof which is easier to explain.
We proceed via proof by induction.
Say we want to generate a set of s elements and that we have already seen n>s elements.
Let's assume that our current s elements have already each been chosen with probability s/n.
By the definition of the algorithm, we choose element n+1 with probability s/(n+1).
Each element already part of our result set has a probability 1/s of being replaced.
The probability that an element from the n-seen result set is replaced in the n+1-seen result set is therefore (1/s)*s/(n+1)=1/(n+1). Conversely, the probability that an element is not replaced is 1-1/(n+1)=n/(n+1).
Thus, the n+1-seen result set contains an element either if it was part of the n-seen result set and was not replaced---this probability is (s/n)*n/(n+1)=s/(n+1)---or if the element was chosen---with probability s/(n+1).
The definition of the algorithm tells us that the first s elements are automatically included as the first n=s members of the result set. Therefore, the n-seen result set includes each element with s/n (=1) probability giving us the necessary base case for the induction.
References
McLeod, A. Ian, and David R. Bellhouse. "A convenient algorithm for drawing a simple random sample." Journal of the Royal Statistical Society. Series C (Applied Statistics) 32.2 (1983): 182-184. (Link)
Vitter, Jeffrey S. "Random sampling with a reservoir." ACM Transactions on Mathematical Software (TOMS) 11.1 (1985): 37-57. (Link)
This is called a Reservoir Sampling problem. The simple solution is to assign a random number to each element of the list as you see it, then keep the top (or bottom) k elements as ordered by the random number.
I would suggest: First find your k random numbers. Sort them. Then traverse both the linked list and your random numbers once.
If you somehow don't know the length of your linked list (how?), then you could grab the first k into an array, then for node r, generate a random number j in [0, r), and if j is less than k, replace the jth item of the array with node r. (Not entirely convinced that doesn't bias...)
Other than that: "If I were you, I wouldn't be starting from here." Are you sure linked list is right for your problem? Is there not a better data structure, such as a good old flat array list.
If you don't know the length of the list, then you will have to traverse it completely to ensure random picks. The method I've used in this case is the one described by Tom Hawtin (54070). While traversing the list you keep k elements that form your random selection to that point. (Initially you just add the first k elements you encounter.) Then, with probability k/i, you replace a random element from your selection with the ith element of the list (i.e. the element you are at, at that moment).
It's easy to show that this gives a random selection. After seeing m elements (m > k), each of the first m elements of the list is part of your random selection with probability k/m. That this initially holds is trivial. Then for each element m+1, you put it in your selection (replacing a random element) with probability k/(m+1). You now need to show that all other elements also have probability k/(m+1) of being selected. The probability that an earlier element is selected is k/m * (k/(m+1)*(1-1/k) + (1-k/(m+1))) (i.e. the probability that the element was in the selection times the probability that it is still there). A little algebra shows this equals k/m * ((k-1)/(m+1) + (m+1-k)/(m+1)) = k/m * m/(m+1) = k/(m+1).
Well, you do need to know what N is at runtime at least, even if this involves doing an extra pass over the list to count the elements. The simplest algorithm to do this is to just pick a random index in [0, N) and remove that item, repeated k times. Or, if it is permissible to return repeated numbers, don't remove the item.
Unless you have a VERY large N, and very stringent performance requirements, this algorithm runs with O(N*k) complexity, which should be acceptable.
Edit: Nevermind, Tom Hawtin's method is way better. Select the random numbers first, then traverse the list once. Same theoretical complexity, I think, but much better expected runtime.
Why can't you just do something like
List<int> GetKRandomFromList(List<int> input, int k)
{
    var rng = new Random();
    var ret = new List<int>();
    for (int i = 0; i < k; i++)
        ret.Add(input[rng.Next(input.Count)]);
    return ret;
}
I'm sure that you don't mean something that simple so can you specify further?
