How to choose a group of numbers in a vector - algorithm

I have an application with probabilities of measured features, and I want to select the n best features from a vector of real numbers. The vector is normalized: the sum of all numbers is 1 (each entry is the probability of some feature).
I want to select a group of the n largest numbers, where n < N (assume approx. 8). The numbers have to be close together without gaps, and they should also have a large sum (the sum of the remaining numbers should be several times lower).
Any ideas how to accomplish that?
I tried an 80% quantile (but it is not sensitive to relatively large gaps, e.g. [0.2, 0.2, 0.01, 0.01, 0.001, 0.001, ... len ~ 100]), and I tried a threshold between two successive numbers, but nothing worked well.
I have a partial solution at the moment, but I am wondering if there is a simple solution that I have overlooked.

John's answer is good. Also, you might try:
sort the probabilities
find the largest gap between successive probabilities
work up from there
From there, it's starting to sound like a pattern-recognition problem. My favorite method is Markov chain Monte Carlo (MCMC).
Edit: Since you clarified your question, my first thought is: since you only have 8 possible answers, develop a score for each one, based on how much probability it contains and whether or not it splits at a gap, and make a heuristic judgement.
Further edit: This sounds a bit like logistic regression. You want to find a value of P that effectively divides your set into members and non-members. For a given value of P, you can compute a log-likelihood for the ensemble, and choose the P that maximizes it.
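As a rough illustration of the sort-and-largest-gap suggestion above, here is a minimal sketch; the function name, the cap of 8 items, and the "several times lower" factor are my assumptions, not something from the question.
(python)
def select_by_largest_gap(probs, max_n=8, dominance=3.0):
    """Return the group of largest probabilities that sits above the biggest gap."""
    ordered = sorted(probs, reverse=True)                      # largest first
    if len(ordered) < 2:
        return ordered
    # gaps between successive sorted probabilities, restricted to the top max_n
    gaps = [ordered[i] - ordered[i + 1] for i in range(min(max_n, len(ordered) - 1))]
    cut = max(range(len(gaps)), key=gaps.__getitem__) + 1      # split after the biggest gap
    group, rest = ordered[:cut], ordered[cut:]
    if sum(group) >= dominance * sum(rest):                    # "several times lower" check
        return group
    return []                                                  # no convincing group found

print(select_by_largest_gap([0.2, 0.2, 0.01, 0.01] + [0.001] * 96))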

It sounds like you want to select the n largest probabilities, but the number n is flexible. If n were fixed, say n = 10, you could just sort your vector and pull out the top 10 items. But from your example it sounds like you'd like to use a smaller value of n if there's a natural break in the data. Maybe you want to start with the largest probability and go down the list, selecting items until the sum of the probabilities you pick crosses some threshold.
Maybe you have an implicit optimization problem where you want to maximize some probability with some penalty for large n. Try stating your problem that way. You might find your own answer, or you might be able to rephrase your question here in a way that helps other people give you a better answer.
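A minimal sketch of that "keep taking the largest until the picked mass crosses a threshold" idea; the threshold of 0.8 and the cap of 8 items are assumed purely for illustration.
(python)
def top_mass(probs, threshold=0.8, max_n=8):
    ordered = sorted(probs, reverse=True)
    picked, total = [], 0.0
    for p in ordered[:max_n]:
        picked.append(p)
        total += p
        if total >= threshold:        # enough probability mass collected
            break
    return picked

print(top_mass([0.4, 0.3, 0.1, 0.1, 0.05, 0.05]))   # -> [0.4, 0.3, 0.1]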

I'm not really sure if this is what you want, but it seems you want to do the following.
Let's assume that the probabilities are x_1, ..., x_N in increasing order. Then you should try to find 1 <= i < j <= N such that the function
f(i,j) = (x_i + x_(i+1) + ... + x_j)/(x_j - x_i)
is maximized. This can be done naively in quadratic time.

Related

Find minimum steps to convert all elements to zero

You are given an array of positive integers of size N. You can choose any positive number x such that x <= max(Array) and subtract it from all elements of the array greater than or equal to x.
This operation has a cost of A[i] - x for each A[i] >= x. The total cost for a particular step is sum(A[i] - x). A step is only valid if sum(A[i] - x) is less than or equal to a given number K.
Over all valid steps, find the minimum number of steps needed to make all elements of the array zero.
0 <= i < 10^5
0 <= x <= 10^5
0 < K < 10^5
Can anybody help me with any approach? DP will not work due to the high constraints.
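For concreteness, a minimal sketch of a single step of the operation as described; the function name and the None-for-invalid convention are my own.
(python)
def apply_step(a, x, k):
    # pick x, subtract it from every element >= x; the step is valid only if
    # the total cost sum(A[i] - x) over those elements is <= K
    cost = sum(v - x for v in a if v >= x)
    if cost > k:
        return None                       # step is not valid
    return [v - x if v >= x else v for v in a]

print(apply_step([5, 3, 3, 1], x=3, k=10))   # cost 2+0+0 = 2 -> [2, 0, 0, 1]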
Just some general exploratory thoughts.
First, there should be a constraint on N. If N is 3, this is much easier than if it is 100. The naive brute-force approach is going to be O(K^N).
Next, you are right that DP will not work with these constraints.
For a greedy approach, I would want to minimize the number of distinct non-zero values, not maximize how much I took. Our worst-case approach is to take out the largest value each time, for N steps. If you can get 2 pairs of entries to both match, then that shortens our approach.
The obvious thing to try if you can is an A* search. However that requires a LOWER bound (not upper). The best naive lower bound that I can see is ceil(log_2(count_distinct_values)). Unless you're incredibly lucky and the problem can be solved that quickly, this is unlikely to narrow your search enough to be helpful.
I'm curious what trick makes this problem actually doable.
I do have an idea, but it is going to take some thought to make it work. Naively we want to take each choice for x and explore the paths that way. This is a problem because there are 10^5 choices for x: after 2 choices we have a problem, and after 3 we are definitely not going to be able to do it.
BUT instead consider the possible orders of the array elements (with ties both possible and encouraged) and the resulting inequalities on the range of choices that could have been made. Now, instead of having to store 10^5 choices of x, we only need to store the distinct orderings we get, and what inequalities there are on the range of choices that get us there. As long as N < 10, the number of weak orderings is something that we can deal with if we're clever.
It would take a bunch of work to flesh out this idea though.
I may be totally wrong, and if so, please tell me and I will delete my thoughts: maybe there is an opportunity if we translate the problem into another form?
You are given an array A of positive integers of size N.
Calculate the histogram H of this array.
The highest populated slot of this histogram has index m ( == max(A)).
Find the shortest sequence of selections of x:
Select an index x <= m which satisfies sum(H[i]*(i-x)) <= K for i = x+1 .. m (the search for a suitable x starts from m and goes down)
Add H[x .. m] to H[0 .. m-x]
Set the new m to the highest populated index in H[0 .. x-1] (we ignore everything from H[x] up)
Repeat until m == 0
If only a "good" rather than an optimal solution is sought, I could imagine that some kind of spectral analysis of H could hint towards favorable x selections, so that maxima in the histogram pile upon other maxima in the reduction step.
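A small sketch of the histogram machinery described above: building H, computing the cost of a candidate x, and applying the reduction. The policy for choosing x is left to the caller, and nothing here is claimed to be optimal.
(python)
def build_hist(a):
    h = [0] * (max(a) + 1)
    for v in a:
        h[v] += 1                         # histogram H of the array
    return h

def step_cost(h, m, x):
    # total cost of choosing x: every element at value i >= x pays i - x
    return sum(h[i] * (i - x) for i in range(x, m + 1))

def apply_x(h, m, x):
    # add H[x..m] onto H[0..m-x], i.e. subtract x from every element >= x
    for i in range(x, m + 1):
        h[i - x] += h[i]
        h[i] = 0
    # recompute the highest populated non-zero slot as the new m
    return max((i for i in range(1, len(h)) if h[i]), default=0)

h = build_hist([5, 3, 3, 1])
m = 5
if step_cost(h, m, 3) <= 10:        # cost = 2*(3-3) + 1*(5-3) = 2
    m = apply_x(h, m, 3)            # elements become [2, 0, 0, 1]; new m == 2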

Algorithm to optimally group list of values

I have several numbers. I need to group them into several groups, so that the sum of all numbers in each group is between a predefined min and max. The point is to leave as few numbers ungrouped as possible.
Input:
min, max: range for sum of numbers
N1, N2, N3 ... Ni: numbers to group
Output:
[N1,N3,N5],[Ni,Nj,Nk,Nm...]...: groups where sum of numbers is between min and max
Na,Nb,Nc...: numbers left ungrouped.
This problem could be viewed as bin packing into bins of size max, with a funny objective: minimize the number of items not packed into bins holding at least min. One idea from the bin-packing literature is that the "small" items (in this case, items that are small relative to max - min) are easy to pack but are accountable for most of the combinatorial explosion of possibilities. Thus some approximation algorithms for bin packing do something clever for big items and then fill in with the small. Another way to reduce the number of possibilities is to round the numbers to belong to a smaller set. It's somewhat obvious how to do that for bin packing (round up), but it's not clear what to do for this problem.
Okay, I'll give an example of how these ideas could be instantiated. Suppose that max = 1 and min = 1/2. Let's try to find a solution that's competitive with the optimum for when max = 2 and min = 1/2. (That may sound terrible, but this sort of approximation guarantee where OPT is held to higher standards is sometimes used in the literature.)
First round every item's size up to a power of 2. Very large items, of size 4 or greater, can't be packed. Large items, of size 2 or 1 or 1/2, are given their own bins. Small items, of size 1/4 or less, are dealt with as follows. Whenever two items of size 1/4 or less have the same size, combine them into one super-item. Pack all of the new items of size 1/2 into their own bins. The remainder has total size less than 1/2. If there is space in another bin, put them there. Otherwise, give them their own bin.
The quality of the resulting solution for max = 2 is at least as good as the quality of OPT for max = 1. Take the optimal solution for max = 1 and round the item sizes. The set of bad bins remains the same, because no item is smaller, and each bin stores less than 2 because each item is less than twice as large as it used to be. Now it suffices to show that the packing algorithm I gave for powers of 2 is optimal. I'll leave that as an exercise.
I don't expect this instantly to generalize into a full algorithm. I have to get back to work, but the approach I would take would be to force OPT to deal with max = 1 while ALG gets to use max = 1 + epsilon, substitute powers of (1 + epsilon) for powers of two in the rounding step, and then figure out how to pack the small items, probably using a dynamic program since greed likely won't work.
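A rough sketch of the power-of-two scheme above (ALG packs into bins of size max = 2). This is only meant to make the rounding and pairing steps concrete; the helper names are mine and the code is not a verified implementation of the argument.
(python)
import math

def round_up_pow2(x):
    return 2.0 ** math.ceil(math.log2(x))

def pack(items):
    sizes = [round_up_pow2(x) for x in items]
    unpacked = [s for s in sizes if s >= 4]                 # very large items can't be packed
    bins = [[s] for s in sizes if s in (2.0, 1.0, 0.5)]     # large items get their own bins
    small = sorted(s for s in sizes if s <= 0.25)
    # repeatedly combine two equal-sized small items into one super-item
    while True:
        merged = False
        small.sort()
        for i in range(len(small) - 1):
            if small[i] == small[i + 1] and small[i] < 0.5:
                small[i:i + 2] = [small[i] * 2]
                merged = True
                break
        if not merged:
            break
    bins += [[s] for s in small if s == 0.5]                # super-items of size 1/2: own bins
    rest = [s for s in small if s < 0.5]                    # remainder has total size < 1/2
    if rest:
        for b in bins:
            if sum(b) + sum(rest) <= 2:                     # fits within ALG's max = 2
                b.extend(rest)
                break
        else:
            bins.append(rest)
    return bins, unpacked

print(pack([1.3, 0.6, 0.24, 0.24, 0.1]))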
If you're not worried about efficiency, simply generate each possible grouping and choose the one that is correct and optimal in the sense you describe. Clearly, this works for any finite list of numbers (and is, by definition, optimal).
If efficiency is desired, the problem seems to become somewhat more difficult. :D I'll keep thinking.
EDIT: Come to think of it, this problem seems at least as hard as "subset sum" and, as such, I don't think there is a solution significantly better than the one I give (i.e., no known polynomial-time algorithm can solve it, if it is NP-hard).
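A brute-force sketch of the "generate each possible grouping" idea, feasible only for very small inputs since the number of set partitions grows like the Bell numbers; the scoring and names are mine.
(python)
def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        # put `first` into each existing block, or into a new block of its own
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def best_grouping(numbers, lo, hi):
    best = None
    for p in partitions(numbers):
        groups = [g for g in p if lo <= sum(g) <= hi]             # valid groups
        ungrouped = [x for g in p if not lo <= sum(g) <= hi for x in g]
        if best is None or len(ungrouped) < len(best[1]):
            best = (groups, ungrouped)
    return best

print(best_grouping([5, 4, 3, 2, 1], lo=5, hi=6))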

Finding the smallest scale factor that brings each number in a set of doubles within one tenth of a whole number

Suppose we have a set of doubles s, something like this:
1.11, 1.60, 5.30, 4.10, 4.05, 4.90, 4.89
We now want to find the smallest positive integer scale factor x such that every element of s multiplied by x is within one tenth of a whole number.
Sorry if this isn't very clear—please ask for clarification if needed.
Please limit answers to C-style languages or algorithmic pseudo-code.
Thanks!
You're looking for something called simultaneous Diophantine approximation. The usual statement is that you're given real numbers a_1, ..., a_n and a positive real epsilon and you want to find integers P_1, ..., P_n and Q so that |Q*a_j - P_j| < epsilon, hopefully with Q as small as possible.
This is a very well-studied problem with known algorithms. However, you should know that it is NP-hard to find the best approximation with Q < q where q is another part of the specification. To the best of my understanding, this is not relevant to your problem because you have a fixed epsilon and want the smallest Q, not the other way around.
One algorithm for the problem is the Lenstra–Lenstra–Lovász (LLL) lattice reduction algorithm. I wonder if I can find any good references for you. These class notes mention the problem and algorithm, but probably aren't of direct help. Wikipedia has a fairly detailed page on the algorithm, including a fairly large list of implementations.
To answer Vlad's modified question (if you want exact whole numbers after multiplication), the answer is known. If your numbers are rationals a1/b1, a2/b2, ..., aN/bN, with fractions reduced (ai and bi relatively prime), then the number you need to multiply by is the least common multiple of b1, ..., bN.
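A small sketch of that LCM-of-denominators observation. Since doubles like 1.11 are only binary approximations of decimals, I first limit the denominator, which is an assumption about the intent (and math.lcm needs Python 3.9+).
(python)
from fractions import Fraction
from math import lcm

def exact_scale(values, max_den=1000):
    # reduce each value to a fraction a_i/b_i, then take LCM of the b_i
    fracs = [Fraction(v).limit_denominator(max_den) for v in values]
    return lcm(*(f.denominator for f in fracs))

print(exact_scale([1.11, 1.60, 5.30]))   # denominators 100, 5, 10 -> LCM 100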
This is not a full answer, but some suggestions:
Note: I'm using "s" for the scale factor, and "x" for the doubles.
First of all, ask yourself whether brute force works: e.g. try s = 1, then s = 2, then s = 3, and so forth.
We have a list of numbers x[i], and a tolerance t = 1/10. We want to find the smallest positive integer s, such that for each x[i], there is an integer q[i] such that |s * x[i] - q[i]| < t.
First note that if we can produce an ordered list for each x[i], it's simple enough to merge these to find the smallest s that will work for all of them. Secondly note that the answer depends only on the fractional part of x[i].
Rearranging the test above, we have |x - q/s| < t/s. That is, we want to find a "good" rational approximation for x, in the sense that the approximation should be better than t/s. Mathematicians have studied a variant of this where the criterion for "good" is that it has to be better than any with a smaller "s" value, and the best way to find these is through truncations of the continued fraction expansion.
Unfortunately, this isn't quite what you need, since once you get under your tolerance, you don't necessarily need to continue to get increasingly better -- the same tolerance will work. The next obvious thing is to use this to skip to the first number that would work, and do brute force from there. Unfortunately, for any number the largest the first s can be is 5, so that doesn't buy you all that much. However, this method will find you an s that works, just not the smallest one. Can we use this s to find a smaller one, if it exists? I don't know, but it'll set an upper limit for brute forcing.
Also, if you need the tolerance for each x to be < t, then the tolerance for the product of all x must be < t^n. This might let you skip forward a great deal, and it sets a reasonable lower limit for brute forcing.
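And here is what the plain brute-force search from the first suggestion looks like; the upper limit on s is just a safety net I added.
(python)
def smallest_scale(xs, tol=0.1, limit=10**6):
    # try s = 1, 2, 3, ... until every s*x is within tol of a whole number
    for s in range(1, limit + 1):
        if all(abs(s * x - round(s * x)) < tol for x in xs):
            return s
    return None                       # nothing found within the limit

print(smallest_scale([1.11, 1.60, 5.30, 4.10, 4.05, 4.90, 4.89]))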

Bin-packing (or knapsack?) problem

I have a collection of 43 to 50 numbers ranging from 0.133 to 0.005 (but mostly on the small side). I would like to find, if possible, all combinations that have a sum between L and R, which are very close together.*
The brute-force method takes 2^43 to 2^50 steps, which isn't feasible. What's a good method to use here?
Edit: The combinations will be used in a calculation and discarded. (If you're writing code, you can assume they're simply output; I'll modify as needed.) The number of combinations will presumably be far too large to hold in memory.
* L = 0.5877866649021190081897311406, R = 0.5918521703507438353981412820.
The basic idea is to convert it to an integer knapsack problem (which is easy).
Choose a small real number e and round the numbers in your original problem to ones representable as k*e with integer k. The smaller e is, the larger the integers will be (an efficiency tradeoff), but the solution of the modified problem will be closer to your original one. An e = d/(4*43), where d is the width of your target interval, should be small enough.
If the modified problem has an exact solution summing to the middle (rounded to e) of your target interval, then the original problem has one somewhere within the interval.
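A sketch of the conversion: round to multiples of e as suggested, then run a plain subset-sum reachability DP on the resulting integers. The slack I allow around the target and the helper names are my own choices, not part of the answer.
(python)
def near_subset_sum(values, lo, hi):
    n = len(values)
    e = (hi - lo) / (4 * n)                      # grid size suggested above
    ints = [round(v / e) for v in values]        # each value becomes k * e
    target = round(((lo + hi) / 2) / e)          # middle of the target interval
    reachable = {0: []}                          # integer sum -> one subset achieving it
    for idx, k in enumerate(ints):
        for s, subset in list(reachable.items()):
            if s + k <= target + n and s + k not in reachable:
                reachable[s + k] = subset + [idx]
    # accept any integer sum that lands close enough to the target
    for s, subset in reachable.items():
        if abs(s - target) <= n:                 # within the rounding slack
            return [values[i] for i in subset]
    return None

print(near_subset_sum([0.30, 0.21, 0.17, 0.09, 0.05], lo=0.49, hi=0.52))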
You haven't given us enough information, but it sounds like you are in trouble if you actually want to OUTPUT every possible combination. For example, consistent with what you told us is that every number is ~0.027. If this is the case, then every collection of half of the elements will satisfy your criterion. But there are 43 choose 21 such sets, which means you would have to output at least 1052049481860 sets (too many to be feasible).
Certainly the running time will be no better than the length of the required output.
Actually, there is a quicker way around this:
(python)
sums_possible = [(0, [])]
# sums_possible is a list of tuples: (sum_value, numbers_that_yield_this_sum)
for number in numbers:
    sums_possible_for_this_number = []
    for sum_value, subset in sums_possible:
        sums_possible_for_this_number.append((sum_value + number, subset + [number]))
    sums_possible = sums_possible + sums_possible_for_this_number
results = [subset for sum_value, subset in sums_possible if L <= sum_value <= R]
Also, Aaron is right, so this may or may not be feasible for you.

How to divide a set of values into two sets of fixed size, such that their sums approach a particular value

I have a set of 18 values (it will always be 18) which I need to distribute into two sets, one of 10 items, and one of 8 items.
The rule for distribution is that the sum of the values in each set must be equal (or as close as possible) to a particular known value: in the first set the sum of the values must be as close as possible to 1500000, and in the second set the sum of the values must be as close as possible to 1000000.
What is the best (and that may mean simplest) algorithm to do this?
Further clarification: the values all range between 110000 and 200000, they are always multiples of 100, they are all positive integers, and there can be duplicates.
There are only 43758 such selections. Go through each of them and find the best.
It is an optimization problem. Here you have two optimization criteria, which should be combined into a single one, for example like this:
F(A, B) = w1*abs(sum(A) - 1500000) + w2*abs(sum(B) - 1000000)
where A and B are your sets, sum() is the sum of the elements in a set, and w1 and w2 are weights.
Then you should find a strategy for iterating over the possible combinations. The simplest strategy is to enumerate all 10-combinations of the 18 values and select the one which minimizes F(A,B). There are C(18,10) = 43758 combinations.
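A brute-force sketch of exactly that: iterate over all C(18,10) choices for set A and keep the split minimizing F(A, B), with equal weights w1 = w2 = 1 assumed.
(python)
from itertools import combinations

def brute_force_split(values, target_a=1500000, target_b=1000000):
    total = sum(values)
    best_f, best_idx = None, None
    for a_idx in combinations(range(len(values)), 10):   # C(18,10) = 43758 choices
        sum_a = sum(values[i] for i in a_idx)
        f = abs(sum_a - target_a) + abs((total - sum_a) - target_b)
        if best_f is None or f < best_f:
            best_f, best_idx = f, a_idx
    chosen = set(best_idx)
    a = [values[i] for i in best_idx]
    b = [values[i] for i in range(len(values)) if i not in chosen]
    return a, b, best_f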
While brute force is probably best for this problem size, there are other tricks you can play if you're willing to accept an approximate solution or if the brute-force method is still too expensive. The basic idea is to snap the values to a small grid, and then do brute force on the (much smaller) set of entries in the grid.
In your case (pretending I've already divided by 100), all numbers are between 1100 and 2000, so you can "snap" them to the 10 values 1100, 1200, and so on. The maximum error in doing so is at most 50/1100, which is less than 5%. Now you've greatly reduced the number of distinct values, which makes the brute force run a bit faster.
Again, I wouldn't recommend this unless (a) brute force is really slow right now or (b) the problem size increases beyond 18.
P.S. The problem is called SUBSET SUM (or sometimes KNAPSACK, depending on the formulation) and is NP-complete. Here's a reference for the approximation idea.
Your problem, as stated, is NP-hard unless there is a pattern to the data.
The only way to achieve the best answer is to enumerate all ways of splitting the 18 values into groups of 10 and 8, with their associated sums. Weight according to your preference.
Looks like an optimization problem to me. Randomly separate the values into the two sets, then start swapping values between them (using a good heuristic), and accept a change if the result is better.
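A minimal sketch of that random-swap local search, reusing the F(A, B) objective from the earlier answer; the iteration count is arbitrary.
(python)
import random

def local_search(values, target_a=1500000, target_b=1000000, iters=10000):
    idx = list(range(len(values)))          # assumes exactly 18 values
    random.shuffle(idx)
    a, b = idx[:10], idx[10:]

    def score(a, b):
        return (abs(sum(values[i] for i in a) - target_a)
                + abs(sum(values[i] for i in b) - target_b))

    best = score(a, b)
    for _ in range(iters):
        i, j = random.randrange(10), random.randrange(8)
        a[i], b[j] = b[j], a[i]               # swap one element between the sets
        s = score(a, b)
        if s < best:
            best = s                          # keep the improving swap
        else:
            a[i], b[j] = b[j], a[i]           # revert
    return [values[i] for i in a], [values[i] for i in b], best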
